> "It allows you to fix user application mistakes..."
Actually it's way more useful than that. In the analytics domain in particular, it lets you see how a piece of analysis (be it a report, a regression, a machine learning model, or whatever) behaved at specific points in time. This is super useful for building high-quality analytics. You can of course achieve the same thing with something like Type 2 SCD, but that requires a level of foresight and fluency with SQL that many people lack. Time travel, meanwhile, is usually "free" to maintain (e.g. it's a byproduct of MVCC), dead easy to interrogate with "AS OF" SQL clauses, and only marginally expensive to actually use.
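To make the contrast concrete, here's a rough sketch (table and column names are made up, and the time-travel spelling varies by engine: Delta Lake's `TIMESTAMP AS OF` is shown below, Snowflake uses `AT(TIMESTAMP => ...)`, SQL:2011 systems use `FOR SYSTEM_TIME AS OF`):

```sql
-- Time travel: "what did this report's input look like on Jan 1?"
-- One clause, nothing to model up front.
SELECT * FROM orders TIMESTAMP AS OF '2024-01-01 00:00:00';

-- The Type 2 SCD equivalent: someone had to design validity columns
-- in advance, and every point-in-time query carries this boilerplate.
SELECT *
FROM   orders_scd
WHERE  valid_from <= TIMESTAMP '2024-01-01 00:00:00'
  AND  (valid_to > TIMESTAMP '2024-01-01 00:00:00' OR valid_to IS NULL);
```

Same question both times, but the second version only works if someone modelled the validity windows before the data arrived.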
>"Column stores are just more efficient than row stores when you're only searching a few fields..."
Column stores are just more efficient, full stop. At least when you're talking about reads: they're slow for writes (cf. Snowflake's newly announced row-centric engine for transactions), but the ability to pack identically typed values into contiguous runs significantly improves compression, and also enables encoding schemes like dictionary encoding and run-length encoding. Couple that with clever write-time techniques like Z-ordering for scan/seek performance, and traditional indexes-on-rows start to look like a clapped-out Volvo from the 1950s.
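For a flavour of the write-time trickery, here's what Z-ordering looks like in Delta Lake's SQL (that spelling is Delta-specific and the table/columns are hypothetical; other engines expose the same idea differently):

```sql
-- Rewrite the underlying files so rows with similar (event_date, user_id)
-- values are co-located in the same files; per-file min/max statistics
-- then let the engine skip whole files on scans that filter on either column.
OPTIMIZE events
ZORDER BY (event_date, user_id);
```

The engine prunes files using those min/max stats rather than walking a row-level index, so there's no separate B-tree to build or keep in sync.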
Or, in other words, Postgres should almost certainly just adopt Parquet as its column store and be done with it.