
Re: There is no such thing as magic
When I first designed a core banking database handling millions of transactions per day, special care went into index design, segment sizes and extents, and partitioning to match the CPU and volume counts, with extensive testing of /*+ ... */ optimizer hints to force hash joins and temp space for sorts, plus scheduler tuning so parallel jobs didn't contend for the buffer pools at the same time. Doing all that today would be largely redundant, but legacy DBA procedures have not moved on.
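For flavour, that era of hint-driven tuning looked roughly like this (an illustrative Oracle-style sketch; the table, index, and column names are invented):

```sql
-- Hypothetical example: override the optimizer rather than trust it.
-- FULL forces a full scan, PARALLEL spreads it over 8 slaves sized to
-- the volume count, USE_HASH forces a hash join whose sort/build area
-- spills to a pre-sized TEMP tablespace instead of thrashing buffers.
SELECT /*+ FULL(t) PARALLEL(t, 8) USE_HASH(t a) */
       a.account_id,
       SUM(t.amount) AS daily_total
FROM   transactions t
       JOIN accounts a ON a.account_id = t.account_id
WHERE  t.txn_date = DATE '2024-01-15'
GROUP  BY a.account_id;
```

Every one of those hints had to be re-validated whenever data volumes or hardware changed, which is exactly the kind of manual babysitting modern optimizers have made redundant.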
Snowflake might have a great UI, but the biggest advances come simply from updating the application and using in-database instrumentation: when a $50k server can hold all the data in Optane memory with hundreds of CPU cores, there is a case for moving analytics back into the operational database and eliminating almost all data-warehouse use cases. When Teradata launched (with i386 AMPs), it was a step change in performance, but the biggest (multi-million-dollar) DBC/1012 of that era is less capable than a commodity laptop today.
Great technology, but the real competitor is that $50k box as a VDI running {Tableau, Power BI, QlikView}.