"Involved over 100 teams". Wonder what the cost of that migration was, and what the ROI might be.
Amazon has a history of taking the long view about ROI, but for the rest of us, cost-of-change is a very big consideration.
Amazon has turned off its final Oracle database, completing a migration effort that has involved "more than 100 teams" in the consumer biz. Amazon's cloudy unit, AWS, regularly takes a pop at enterprise database vendors while promoting its own Relational Database Service (RDS), which offers Aurora (MySQL and PostgreSQL …
Unfortunately, dropping Oracle usage in small increments over time doesn't result in matching savings. You have the cost of delivering and supporting the new RDBMS while Oracle license costs look exactly the same year on year as usage drops by 10/25/40%.
Oracle licensing may be predatory and avaricious but...actually there is no 'but'.
It shouldn't really be done as a black-ops project, but it might have to start as such (cf. how YouTube convinced the internet to stop using Internet Explorer 6) with a proof of concept. I suspect that the vast majority of any company's databases do nothing special and could be migrated to another system with a dump/load. The rest will, of course, take research, planning and costing, but the savings can quickly become substantial once you stop having to buy licences for everyone in the company on all their devices.
If Oracle is required for "mission critical" stuff then work around it. Once you can demonstrate functional equivalence without the licensing costs for some projects, it should be possible to draw up a high-level analysis that quotes Amazon as having done the same thing already. Going to the "cloud" will probably be mentioned, and any references to this new form of lock-in are likely to fall on deaf ears, because "cloud" sounds a lot like "lower headcount" to a PHB.
Note, as soon as Oracle finds out that something like this is in the offing they'll launch the sales droids with FUD bombs to try and frighten people off, so you must have functional equivalence for backup/restore, etc. But, also, once you've moved one proper application you can also start asking for rebates.
At the end of the day, there should be nothing wrong in keeping some stuff in Oracle if it really is the best tool for the job. The problem is that Oracle needs to learn that that is what they have to provide and not Faustian contracts.
It's not necessarily about money but strategy and policy. Being locked in is not a good position. What is learned in the process of migration may be more valuable than the cost. Proprietary solutions can mean an advantage, but when I see statements such as "AWS does not have database technology as capable as ours" it assumes that Amazon is even using those capabilities.
I did an Informix to Postgres conversion for a branch of our state government.
IBM wanted us to continue to run Informix on a Mainframe at a cost of about 5,000.00 USD per month,
or buy a 'core' VMware license for 500,000 - 1,000,000 USD one time, plus maintenance.
It took me about a year to do the conversion and testing. About 230 tables converted, 80 functions and 175 triggers.
I did most of the work by writing Python programs that converted about 70 - 80% of the stuff for me.
Hope this gives you an idea.
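A minimal sketch of the kind of automation described above. The type mapping and table/column names here are my own illustrative assumptions, not the actual conversion scripts used:

```python
import re

# Hypothetical mapping of common Informix column types to Postgres
# equivalents; real conversion scripts would cover many more cases
# (intervals, money, opaque types, etc.).
TYPE_MAP = {
    r"\bDATETIME YEAR TO SECOND\b": "TIMESTAMP",
    r"\bSMALLFLOAT\b": "REAL",
    r"\bBYTE\b": "BYTEA",
    r"\bLVARCHAR\b": "VARCHAR",
}

def convert_ddl(informix_ddl: str) -> str:
    """Rewrite Informix DDL type names into Postgres syntax."""
    pg = informix_ddl
    for pattern, replacement in TYPE_MAP.items():
        pg = re.sub(pattern, replacement, pg, flags=re.IGNORECASE)
    return pg

ddl = "CREATE TABLE docket (filed DATETIME YEAR TO SECOND, note LVARCHAR(512));"
print(convert_ddl(ddl))  # → CREATE TABLE docket (filed TIMESTAMP, note VARCHAR(512));
```

Bulk-rewriting the mechanical 70-80% like this leaves the hand-work budget for the genuinely awkward parts (functions and triggers).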
From what the forums say, converting from Oracle to Postgres is not that big of a deal ..
The commas are a little confusing on the VMware licences.
Did you spend all your time doing this? In which case it's presumably savings after the first year or so?
Oracle -> PG is tried and tested and EnterpriseDB means you can keep most of your "stored procedures" aka Oracle lock-in bombs.
Sorry, heads down writing Python programs to generate web pages from Postgres tables for Flask.
Anyway, it was 500,000 USD to 1,000,000 USD (half to a full million) depending on what options we wanted.
They even quoted a POWER-based appliance instead of the VMware license, for close to the high figure...
Had to go back to our internal timeline...
Started POC conversion Jan 2016, finished March 2016, which included writing the conversion programs and scripts (90 days at about 6 hours a day).
Ran testing conversion August 2016 - April 2017, which included testing by QA (off and on - a lot of testing was done by QA).
Converted our first court instance May 2017. Completed all conversions of 20 existing courts by December 2017.
So, it would be safe to say one person converted an enterprise system to Postgres by themselves in less than a man-year.
Most of the work I automated with python (Hey, I'm a lazy programmer / DBA - what can I say ...)
500,000 USD to 1,000,000 USD
Equates to around 2 to 4 top-notch DBAs for a year so you were saving money from the start. Of course, a full analysis would cover what compromises (say reports), if any, you had to make and whether you got any additional benefits.
Most of the work I automated with python (Hey, I'm a lazy programmer / DBA - what can I say ...)
But that's the best kind. The beancounters love automation but they often don't understand the kind of people it takes to get it.
Amazon has a history of taking the long view about ROI, but for the rest of us, cost-of-change is a very big consideration.
Amazon had a major PR motive in eating their own dog-food, and quickly. If you just wanted to get off of Oracle quickly, you can do a more modest conversion to EnterpriseDB, with its Oracle PL/SQL compatibility layer (on top of PostgreSQL).
...is that, back in the 1980s when Oracle was a relatively small new company struggling to expand and make a name for itself, it competed against the likes of IBM and DEC by stressing its portability.
I remember DEC sales support people complaining how hard it was to persuade customers to buy Rdb when Oracle salesmen kept arguing, "What if you decide to move off VAX/VMS? Oracle runs on 43 (or whatever) platforms".
We DECcies all thought the joke about how Oracle runs best on a slide projector hilarious, but it didn't cut much ice with the punters. So zillions of customers bought Oracle, secure in the knowledge that they could easily switch OS and hardware whenever they wanted to.
And about 5 percent of them ever did.
Old school guy here. Once upon a time we did software product evaluation matrices that included technical support, cost of ownership (including testing environments) and vendor pricing reputation.
Experienced hands made few mistakes. Nowadays management sorts use Gartner Magic Quadrant reports to pick winners - or have some consultancy make a recommendation - with no financial consequences for them. Maybe only Walmart and Amazon fire those responsible for negative ROI outcomes.
Then Microsoft invented TCO, total cost of ownership, which never included yearly licence fee hikes, and optimum factors that worked for their marketing hype. But experienced evaluation people got the flick as salesdroids targeted the decision makers with a budget. Game over.
Then Adabas/Natural DB started to Oracle their remaining declining customer base. One manager coined the expression bushranger tactics. IBM Mainframe users were astounded by vendor aggression. Most never bickered over price increases, when capacity management experts were made redundant.
Back to Oracle. Their tools for emergencies and business restoration were bulletproof. That won them business over DB2. People buying MS SQL Server never thought that far ahead. Then Oracle started to do a Software AG trick: antagonise their reference sites.
Then came the Cloud - AWS and Cloudtastrophies. My tip to new players is never buy a product that allows auditors to set foot on site or steal your usage numbers. Greed never changes, so pick solutions where blackmail is less likely. OpenSource spinoffs are reliable enough.
If vendors won't licence or work with AWS, avoid them and pick another.
If you don't scale much, proprietary software can make sense, but you really need to be clear on requirements rather than dev preferences.
I did have to laugh about AWS railing against lock-in. Hello Pot! Nice of you to comment on that whistling sound!
If you want to save money, manage your asset inventory well and make the right thing to do, the easy thing to do.
That's a nice AWS setup you've got there, would be a pity if something were to happen to it.
All the cloud vendors have a vested interest in making it easy to check in but difficult to leave their data hotels.
And their engagement with open source is often only as far as it serves their narrow, commercial requirements.
SQL Server is still playing catch up with Oracle and DB2, which is why it can afford to / has to be cheaper. To some it's still the "new kid on the block" but if you look at the cost of licences for other MS "enterprise" products, they have a distinctly "Oracle" feeling about them.
At the 2018 Re:Invent conference in Las Vegas, AWS CEO Andy Jassy said: "The world of... the old-guard commercial-grade databases has been a miserable world for the last couple of decades for most enterprises... Databases like Oracle and SQL Server are expensive, high lock-in and proprietary."
I thought something similar: unlike Amazon Prime, for example.
We lock you in but we don't want to be locked in. Apparently it's not a good thing (unlike Amazon Prime).
This sounded like a pissing match between some big egos. In the end Amazon still uses Oracle for some of its functions. Most of us don't have the luxury of unlimited resources including capital and labor to make the transition. We can't find the skills to have people run open source databases and AWS "services" and if we do we certainly can't keep them.
I'd rather lock in to a company that has a proven track record, a huge labor pool, and enterprise class support versus a company that tells you to just build it. As the old saying goes "If it ain't broke..."
I'd rather use literally anyone than Oracle. If only we were able to score products down on a procurement agreement for using an Oracle backend, unfortunately we can't so there is always a risk that it will sneak in. However for a pure DB backend that we specify, I am not going to steer down the Oracle Canal again.
As a former Oracle developer who has transitioned to MS SQL Server, I can tell you that Oracle is broken. (To be fair, I have a nagging suspicion that all major RDBMS products are broken in quite significant, soul-crushing ways, I just haven't risked my sanity by testing it yet.)
The decision to initially use Oracle is where the mistake lies.
As more and more people realise this, fewer and fewer people will use it.
It's all very well saying "but it's too expensive to change now", but it's literally never going to get any cheaper... it's as simple as that. You will have to pay that transition charge at some point, as well as the higher annual subscriptions and whatever other costs in the meantime.
Sure, it's not easy or cheap to up sticks and leave, but it won't be any cheaper tomorrow either. Continuing down that path is how people end up in bankruptcy, companies in administration, and banks clinging to outdated technology "because it's too old to change now and we can't find people to do it".
Oracle is a stupid decision. It's that simple. If you're tiny, it's *always* been stupid. If you're mid-size, it may have been understandable at one point but otherwise stupid. If you're seriously large - as this shows - it was still stupid.
You can continue to propagate the stupid decision into the future with excuses, or you can say "it was stupid, and now we're trapped" and realise that one day you'll have to do the exact same thing people are telling you to do now anyway.
I honestly judge any company whose IT department happily use, recommend or support Oracle.
Interestingly, it very clearly states in the article:
AWS evangelist Jeff Barr reports ... migrated to AWS database services.
If we look at AWS database services we find:
Relational Database Service (RDS), which offers Aurora (MySQL and PostgreSQL compatible), PostgreSQL, MySQL and MariaDB, as well as Oracle and SQL Server.
So it would seem that what has happened is that Amazon the company is no longer running Oracle on its private internal IT systems. Instead it is using its public AWS database services and paying accordingly. Obviously, in doing this, some DBs were migrated away from Oracle.
Does any one have a breakdown (ie. numbers) of which AWS database service those "nearly 7,500 Oracle databases" were migrated to?
Most of us don't have the luxury of unlimited resources including capital and labor to make the transition.
Amazon had a major PR motive in eating their own dog-food, and quickly. If you just wanted to get off of Oracle quickly, you can do a more modest conversion to EnterpriseDB, with its Oracle PL/SQL compatibility layer (on top of PostgreSQL).
Most programs just use some standard database connector library that can easily be swapped out from Oracle to PostgreSQL in a few lines of code in one spot, and a parallel database dump from one and import into the other.
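Sketched below with Python's DB-API as the example of such a standard connector layer. I'm using sqlite3 as a self-contained stand-in driver, since any compliant driver (cx_Oracle, psycopg2, ...) plugs into the same code; the table and names are illustrative, not from the article:

```python
import sqlite3

def open_connection():
    # The driver choice is isolated to one import and one connect() call:
    #   Oracle:   cx_Oracle.connect("user/pass@host/service")
    #   Postgres: psycopg2.connect("dbname=app user=app")
    # Here, sqlite3 stands in so the example is self-contained.
    return sqlite3.connect(":memory:")

def fetch_orders(conn):
    # Application code below the connector never mentions the vendor.
    cur = conn.cursor()
    cur.execute("SELECT id, total FROM orders ORDER BY id")
    return cur.fetchall()

conn = open_connection()
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])
print(fetch_orders(conn))  # → [(1, 9.5), (2, 20.0)]
```

One real-world wrinkle: parameter placeholder styles differ between drivers (`?` here vs `%s` for psycopg2, `:name` for cx_Oracle), so the swap is rarely quite zero lines - but it stays in one spot.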
We can't find the skills to have people run open source databases and AWS "services" and if we do we certainly can't keep them.
Oracle databases don't just run themselves... They need 100X more expertise, optimization, care and feeding than user-friendly open-source databases like PostgreSQL or MySQL/MariaDB. I know this from years of miserable experience with some big ones... No doubt you can find a consultant to set up and maintain your open-source database for a fraction of the cost of your Oracle licence fees, not to mention your Oracle consultant's fees.
>but is it smarter? Or cheaper?
It's all relative...
If the business already has a load of XYZ experts and management just want the system in then typically it is wise to utilise those experts and implement using XYZ; your project is more likely to achieve its deadlines and budget targets than would otherwise be the case.
The fun and games start when a company for various reasons has two (or more) camps, such as IBM mainframe DB2 and Unix/Oracle, and you work for an integrator with a team of devs primarily skilled in MS SQL Server...
Still a long way to go, but after 20 years as an Oracle DBA, I'm really digging Postgres. It's very familiar in feel and a lot of PG's features map nicely to those in Oracle. At the very least Postgres beats SQL Server hands down for concurrency: SQL Server is an object-locking festival on speed; I've never worked on a DB with such draconian locking. And let's not even get started with the "(nolock)" hint that every dev thinks is so great - yeah, until you commit to using the data you got off those dirty reads and the base transaction gets rolled back!
Welcome aboard! The familiarity is, of course, not really a coincidence, as back in the mists of time they were both based on the same project at Berkeley.
And I think experienced Oracle DBAs are going to help make Postgres even better. In the last 10 years it's got so much better, largely due to the input (both in source code and comments) of seasoned industry professionals.
By 'marinating', I take it you mean the BOFH sense of the word.
Reginald D. Hunter commented to a crowd in England (although it could have been Ireland) that they drank the way Americans ate. Do Americans pre-eat before they go to a restaurant?
Performance doing what, precisely? Postgres is generally considered to have excellent support for concurrency but is acknowledged not to have the best write performance, though changes in the last few releases have seen significant improvements there.
But in any particular test (OLAP/OLTP) it's going to come more down to the ability of the DBA to configure the system correctly than any inherent DB features: if you don't know what they are or how to control them, they aren't going to help much.
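To make that concrete, the handful of `postgresql.conf` settings below are the usual first stops when tuning. The values are illustrative guesses for a dedicated box with 32GB of RAM, not recommendations:

```
# Illustrative starting points; actual values depend entirely on workload.
shared_buffers = 8GB          # ~25% of RAM is a common rule of thumb
effective_cache_size = 24GB   # hint to the planner about the OS cache
work_mem = 64MB               # per sort/hash, per operation - be careful
maintenance_work_mem = 1GB    # speeds up VACUUM and index builds
wal_compression = on          # trade CPU for less WAL I/O
```

A DBA who knows which of these a given OLTP or OLAP workload actually stresses will beat any out-of-the-box comparison.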
Really, you know nothing. SQL Server's transaction isolation level can be set to avoid locking if you want; do Postgres or Oracle have columnstore indexes, or in-memory tables if you wish? How's the query optimisation, btw? I could continue with tabular design.
Database customers living in "a miserable world for the last couple of decades"?
Most of what I know about databases (which is not much) I learned while working with one specific ERP application as it has evolved.
It started out with Unidata (HPUX) in the 80's, moved to OpenEdge(Windows) in the 90's and finally switched to MS SQL around 2010.
Along the way I formed my very subjective opinions about each..
- Unidata was the hardest to use but use of disk space was efficient, it could scale up and still run fast, required little maintenance and licensing was simple.
- OpenEdge was a little easier to use, took up more disk space, ran slower, was low maintenance with straightforward licensing.
- MS SQL easiest to use, more disk space and slower, higher maintenance and licensing is a real pain. (and more likely to run into weird issues).
Generally I like MS SQL while I'm working with data as an end user... but as an admin, I dislike everything about it.
On the other hand, I really appreciated the ease of maintaining OpenEdge, and using it was "somewhat acceptable".
When looking back on Unidata today, I'm in awe of its speed/efficiency... and I wonder "what if" it had just been a little easier to use?
I started out on Pr1me Information (a PICK implementation) which we migrated to UniVerse (UniData's current sibling) on SunOS by exporting and importing. It worked instantly without any changes at all.
Bloody brilliant database and much easier to use than Oracle/SQL Server/DB2.
They are using as much free / open source as possible and giving what back?
I'm sure there have been several articles on the Reg and Ars etc discussing their packaging up of code and selling it as a service.
Maybe / hopefully some of the new licenses that Elastic etc are using will stem the flow but it doesn't look good for the startups trying to compete.
I have made a total of 2 code pull requests (minor translation issues for a language that has less than 1M speakers); one was accepted, the other was accepted after an alteration. I'm crap at coding but thought I should try to help. If I'd thought this was going to help Amazon I probably wouldn't have bothered.
Good one sticking it to Oracle but are Amazon any better?
Oracle itself will be on an accelerating shrinking path because the people who are so stuck on not changing it are also on the wane. At 75, Larry needs to find an out. He needs legacy that extends beyond a basketball naming rights deal on a shiny new toy building in S.F.
He needs a winning cloud strategy - but, wait! - he's already lost that race and can never recover. Karma is a hard Mother when you have been auditing your customers and sticking it to them for years. The old capture and exploit routine because Larry knows it's hard to break free - but it's getting easier by the day.
What to do?
Get Microsoft to buy you. It's his only face-saving out.
Also, best wishes to Mark Hurd and I mean that. He's having a very tough go on the health front.
"Oracle first, SQL Server second and AWS third"? I can't believe I'm hearing this. Where's DB2? And I guess that's not counting all the people who are using Postgres, Maria, MySQL, or whatever and not having to pay money to anyone, except for hardware. That Oracle - and IBM, for that matter - might have capabilities that are far ahead of anything either Amazon or open-source software can provide, though, is quite possible, but not everyone who needs a database needs those capabilities.
We had an agency in Hamilton County that bought into Oracle's promises and migrated everything over to it. Too bad Oracle was not compatible with the rest of the county's financial and personnel systems. Plus, as soon as one of their Oracle oracles got proficient, they jumped ship for 50% to 100% more pay, leaving the agency in constant turmoil. Personally, I'd never use the cloud for anything other than something for airplanes to fly thru or to bring precipitation, but I'd gladly switch to open source products. Besides, Larry Ellison is already a billionaire several times over.