I am an Amazon Redshift specialist, and I have Views about all this.
Bona fides (and a bit of self-publicity): I maintain a web site where I publish white papers on my investigations into Redshift, and where I maintain ongoing monitoring of Redshift across regions: https://www.amazonredshiftresearchproject.org
I may be wrong, but I think I know more about Redshift than anyone outside the RS dev teams. I've spent the last five years investigating Redshift, full-time.
Redshift is basically a vehicle for sorting, which is to say, for having sorted tables, rather than unsorted tables.
It is this method, sorting, which, when and only when correctly used, allows timely SQL on Big Data.
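To make that concrete, here's a minimal sketch - the table and column names are hypothetical, invented purely for illustration:

```sql
-- Hypothetical fact table, sorted on the column most queries restrict on.
CREATE TABLE fact_events
(
    event_ts    TIMESTAMP    NOT NULL,
    customer_id BIGINT       NOT NULL,
    event_type  VARCHAR(32)  NOT NULL,
    payload     VARCHAR(512)
)
DISTSTYLE EVEN
SORTKEY ( event_ts );

-- A range restriction on the sort key lets Redshift use the per-block
-- min/max values to skip almost every block on disk, which is what makes
-- the query timely even when the table is enormous.
SELECT count(*)
FROM   fact_events
WHERE  event_ts >= '2024-01-01' AND event_ts < '2024-01-02';
```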
You also get the cluster (as opposed to a single node), but that's a secondary method for improving performance: it does nowhere near as much as correctly operated sorting; there are sharp limits to cluster size; a few behaviours (commits, for example) actually slow down as the cluster grows; and it costs a lot of money.
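For completeness, the cluster side of things is expressed through distribution. Again a hypothetical sketch, assuming two tables which join on customer_id:

```sql
-- Distributing both tables on the join column means each slice can join its
-- own rows locally, without redistributing data over the network at query time.
CREATE TABLE customers
(
    customer_id BIGINT      NOT NULL,
    region      VARCHAR(32) NOT NULL
)
DISTSTYLE KEY
DISTKEY ( customer_id )
SORTKEY ( customer_id );

CREATE TABLE orders
(
    order_id    BIGINT    NOT NULL,
    customer_id BIGINT    NOT NULL,
    order_ts    TIMESTAMP NOT NULL
)
DISTSTYLE KEY
DISTKEY ( customer_id )
SORTKEY ( order_ts );

-- Co-located join: no broadcast or redistribution step is needed.
SELECT c.region, count(*)
FROM   orders o
JOIN   customers c USING ( customer_id )
GROUP  BY c.region;
```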
There are two key problems with sorting.
First, it makes data design challenging. When you define your tables, you also define their sorting orders, and only queries which are appropriate to the sorting orders you have will execute in a timely manner; so when you make your data design, and so pick your sorting orders, you are defining the set of queries which *can* execute in a timely manner and also the set which *cannot*.
Your job is to make it so that all the existing queries, the known near-future queries, and the general medium-term future queries are going to be in the set of queries which *can* execute in a timely manner. (In the end, after enough time, there will be enough change that your data design must be re-worked.)
This issue, getting the data design right, is a *complete* kicker. It's usually challenging, and it's an art, not a science, and - critically - it's not enough for the *devs* to know how to get this right. Once the design has been made, it must also be *queried correctly*, which means the *USERS* also have to know all about sorting and how to operate it correctly; if they issue queries which are inappropriate to the sorting orders in the data design, pow, it's game over - you are *not* going to get timely SQL, and the cluster will grind to a halt.
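To show what "queried correctly" means in practice, take the hypothetical fact_events table from the sketch above, sorted on event_ts:

```sql
-- Appropriate to the sort order: the restriction is on the sort key, so the
-- block-level min/max values exclude nearly everything and only the blocks
-- for that one day are read.
SELECT count(*)
FROM   fact_events
WHERE  event_ts >= '2024-06-01' AND event_ts < '2024-06-02';

-- Inappropriate to the sort order: no restriction on the sort key, so every
-- block in the table has to be read, however selective the filter is. This
-- is the kind of query an uninformed user issues, and it is what grinds a
-- cluster to a halt.
SELECT count(*)
FROM   fact_events
WHERE  customer_id = 12345;
```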
So Redshift is a knowledge-intensive database, for both the devs and the users; it's not enough to know SQL. You need to know SQL, and Redshift, and that's problematic, because AWS to my eye publish no meaningful information about Redshift.
Because operating sorting correctly imposes a range of constraints and restrictions upon the use of Redshift, Redshift is a quite narrow use-case database; it is NOT, absolutely not, a general purpose database, in any way, shape or form.
The second problem is VACUUM; which is to say, data in Redshift is either sorted or unsorted. New data is almost always unsorted, and it has to be sorted, by the VACUUM command. However, you can only have *one* VACUUM command running at a time, *PER CLUSTER*. Not per table, not per database, but *per cluster*. So you have a budget of 24 hours of VACUUM time per day; that's it.
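For the record, this is what operating VACUUM looks like in practice (the table name is hypothetical; svv_table_info is the standard system view for checking how far behind you are):

```sql
-- Re-sort the unsorted region of a single table. Only one VACUUM can run
-- at a time across the entire cluster, so these have to be scheduled.
VACUUM SORT ONLY fact_events;

-- See how far behind each table is: "unsorted" is the percentage of rows
-- currently in the unsorted region.
SELECT   "schema", "table", tbl_rows, unsorted
FROM     svv_table_info
ORDER BY unsorted DESC NULLS LAST;
```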
Redshift - like all sorted databases - faces a producer-consumer scenario. New incoming data produces unsorted blocks (all data in RS is stored in 1 MB blocks - the block is the atomic unit of disk I/O); VACUUM consumes them. When the rate at which new unsorted blocks are produced exceeds the rate at which those blocks are consumed, it's game over. Your cluster will then degenerate into an unsorted state, which is to say, sorting will be operated incorrectly, and Redshift operated incorrectly is *always* the wrong choice - there are better choices in that scenario.
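If you want to watch that producer-consumer balance directly, the block-level system tables show it. A sketch, assuming you have the privileges to see other users' rows:

```sql
-- One row per 1 MB block in STV_BLOCKLIST; counting sorted vs unsorted blocks
-- per table shows how fast the unsorted region is growing relative to how
-- fast VACUUM is consuming it.
SELECT   TRIM(t.name)                                    AS table_name,
         SUM(CASE WHEN b.unsorted = 1 THEN 1 ELSE 0 END) AS unsorted_blocks,
         COUNT(*)                                        AS total_blocks
FROM     stv_blocklist b
JOIN     stv_tbl_perm  t ON t.id = b.tbl AND t.slice = b.slice
GROUP BY TRIM(t.name)
ORDER BY unsorted_blocks DESC;
```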
I am quite sure this new real-time data-feed will produce unsorted blocks, I am certain it will be gratuitously used by uninformed end-users (which is all of them, as AWS to my eye publish no meaningful information about Redshift at all), and I suspect it will consume a significant part of the cluster's capacity to consume unsorted blocks.
There's no free lunch here.
For the last however many years, to my eye, Redshift has had almost entirely *non*-Big-Data-capable functionality added to it. I suspect this is more of the same.
I would add, as a warning, that I consider AWS, as far as Redshift is concerned, to have a culture of secrecy, to relentlessly hype Redshift, and to deliberately obfuscate *all* weaknesses. I consider the docs worthless - you read them and come out the other end with no clue what Redshift is for - and the TAMs say "yes" to everything you ask them. Finally, I think RS Support are terrible; I think they have a lot of facts, but no *understanding*. My experiences with them, and the experiences I hear from other admins, are of the most superficial responses and an obvious lack of technical comprehension - but clients who are not aware of this are misled by the belief that they are talking to people who know what they're doing (and given how much enterprise support costs, they ought to be).
The upshot of all this is that I see a lot of companies moving to Snowflake. AWS have only themselves to blame. In my view, AWS need to publish meaningful documentation, so clients *can* learn how to use Redshift correctly, and then have Redshift used only by people who actually have use cases which are valid for Redshift, and move all other users to more appropriate database types (Postgres or clustered Postgres, or a clustered unsorted row-store such as Exasol, which is a product AWS do not offer).