Database from the 1980s needs time travel says author

PostgreSQL co-creator and MIT computer science professor Michael Stonebraker has listed his top requests for features to add to the popular open-source database, including a time travel function he admits was implemented badly in the 1980s. Speaking at the Postgres Vision conference, Stonebraker said the time-travel code was …

  1. b0llchit Silver badge
    Alien

    You better be careful what you wish for. Time travel is dangerous business.

    Just imagine this: SELECT sum(profit) FROM tomorrow(sales) INTO pocket;

    I would not mind if that was my pocket and the sum of profits were positive. But an SQL injection flaw can easily make us all (or just me) poor (again) using time travel code.

  2. emacs-enjoyer

    Time travel is one of the key features of Datomic.

  3. Jedit Silver badge
    Joke

    Time travel?

    I surely can't be the only person who read the title and pictured Christopher Lloyd turning up in a DeLorean and saying "Marty, you need to come with me back to 1985! It's your database!"

    1. deadlockvictim

      Re: Time travel?

      My first thought was, the problem with time travel back in 1995 was how to get 1.21 jigawatts of power.

  4. Anonymous Coward
    Anonymous Coward

    > “It allows you to fix user application mistakes..."

    Actually it's way more useful than that. In particular, in the analytics domain it allows you to see how a piece of analysis - be it a report, regression, machine learning model or whatever - behaved at specific points in time. This is super useful for building high quality analytics. You can of course achieve this with something like Type 2 SCD, but that requires a level of foresight and fluency with SQL that many people lack. Time travel, meanwhile, is usually "free" to maintain (e.g. it's a byproduct of MVCC), dead easy to interrogate with "AS OF" SQL clauses (a sketch of such a query follows below), and only marginally expensive for actual use.

    >"Column stores are just more efficient than row stores when you're only searching a few fields..."

    Column stores are just more efficient, full stop. At least when you're talking about reads. They're slow for writes (cf. Snowflake's newly announced row-centric engine for transactions), but the ability to pack identically typed values into runs of columns significantly improves compression performance, and also enables encoding schemes like dictionary encoding and run-length encoding. You can then couple that with clever techniques like Z-ordering on write for scan/seek performance that make traditional indexes-on-rows look like a clapped-out Volvo from the 1950s.

    Or in other words, Postgres should almost certainly just adopt Parquet as its column store and be done with it.
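    A minimal sketch of what such an "AS OF" query looks like, assuming a hypothetical system-versioned sales table. Postgres has no native syntax for this today; the SQL:2011-style system versioning found in SQL Server, Db2 and MariaDB reads roughly like this (the exact literal syntax varies slightly between engines):

      -- The report as it stands right now
      SELECT region, SUM(profit) AS total_profit
      FROM sales
      GROUP BY region;

      -- The same report as it stood at the end of last year
      SELECT region, SUM(profit) AS total_profit
      FROM sales FOR SYSTEM_TIME AS OF TIMESTAMP '2024-12-31 23:59:59'
      GROUP BY region;

    No Type 2 SCD plumbing is needed: the engine keeps the history, and the query simply names a point in time.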

    1. MrRimmerSIR!

      "Column stores are just more efficient full stop. At least when you're talking about reads. They're slow for writes (cf Snowflake's newly announced row-centric engine for transactions), but the ability to pack identically typed values into runs of columns significant improves compression performance, and also enables encoding schemes like dictionary encoding and run-length encoding. You can then couple that with clever techniques like zordering on write for scan/seek performance that make traditional indexes-on-rows look like a clapped out volvo from the 1950s.

      Or in other words Postgres should almost certainly just adopt Parquet as its column store and be done with it.

      "

      Or get the best of both worlds: use SQL Server, with the current table configured with standard indexes and the history table as a clustered columnstore. No need to move data to multiple locations.
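      A rough sketch of that arrangement, using hypothetical table and column names (SQL Server 2016 and later let the history table behind a system-versioned temporal table carry a clustered columnstore index, while the current table keeps ordinary rowstore indexes):

        -- History table: same columns as the current table, no constraints,
        -- compressed as a clustered columnstore for analytic reads.
        CREATE TABLE dbo.SalesHistory
        (
            SaleId    int           NOT NULL,
            Region    varchar(50)   NOT NULL,
            Profit    decimal(18,2) NOT NULL,
            ValidFrom datetime2     NOT NULL,
            ValidTo   datetime2     NOT NULL
        );
        CREATE CLUSTERED COLUMNSTORE INDEX cci_SalesHistory ON dbo.SalesHistory;

        -- Current table: standard rowstore indexes, system-versioned into
        -- the columnstore history table above.
        CREATE TABLE dbo.Sales
        (
            SaleId    int           NOT NULL PRIMARY KEY,
            Region    varchar(50)   NOT NULL,
            Profit    decimal(18,2) NOT NULL,
            ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
            ValidTo   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
            PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
        )
        WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.SalesHistory));

        -- An "AS OF" query then spans current and history tables transparently.
        SELECT Region, SUM(Profit) AS TotalProfit
        FROM dbo.Sales FOR SYSTEM_TIME AS OF '2024-12-31 23:59:59'
        GROUP BY Region;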
