Cloud versus on-prem in depth.
It's important not to be fooled by what the large players are doing, even in the cloud space. Not everyone is using JBODs or relying on object storage. There is an awful lot of hyperconvergence, and a lot of traditional SANs, amongst cloud and service providers.
This is because there are a lot of different use cases to be met.
If you are rewriting everything from scratch as "cloud native", using microservices, using application-level redundancy that doesn't need shared storage, or relying on the application rather than the storage to provide the backups/DR/resiliency for you, then you can absolutely get away with what amounts to a bunch of individual nodes with some disks in them and some software that takes periodic snaps for DR.
But this is only possible because these applications are meeting their RPOs by dispersing data horizontally across multiple nodes at the application level. This flat out doesn't work for legacy applications: the kind most businesses are reliant upon, and unlikely to recode for decades yet to come.
Legacy applications are "pets", not "cattle", and trying to get them to a "cattle" state takes an awful lot of management software layered on top. Configuration management, desired state configs, some means to separate the application from its data and make that data highly resilient.
Traditional SANs and more modern hyperconvergence have been solving the "make the data highly resilient" problem for ages. If your application goes down, or the node hosting that application goes down, the data is there, accurate to the last bit, ready to be reconnected to. RPO of 0, RTO of however long it takes to restart the application.
But when you look at cloud solutions really closely, a lot of this goes away. If the resiliency isn't in the application then you start to have to make compromises on the RPO of the data side. An HCI cluster, or even a traditional SAN, spread across multiple racks in multiple power zones with redundant switching can be done relatively cheaply and easily. This gets a lot more difficult with JBOD "clusters", and you flat out can't provide this sort of resiliency in an "everything is local, no shared storage at all" setup.
Now, all storage has limits. If you have enough network connectivity then you can do real-time replication of your data layer across metro areas. Keep the latency below 10msec, and you're probably good for 99.5% of real-world applications. HCI and SAN solutions have this taped. Not a problem. This can provide real-time RPO of 0 across multiple datacenters, and it honestly isn't that expensive, if you're playing the game at datacenter scale.
Go beyond that metro area, however, and HCI/SAN suffer from the same problem as JBODs or the "all local" server solutions favoured by some cloud providers: all they can do is send periodic snapshots places.
This means that your RPO goes above 0. Any time you experience an unexpected failure you lose data.
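The arithmetic here is simple but worth making explicit. A minimal sketch (the interval and timestamps are illustrative, not from any real product):

```python
# Worst-case data loss (RPO) under periodic snapshot replication.
from datetime import datetime, timedelta

def worst_case_rpo(snapshot_interval: timedelta) -> timedelta:
    """With periodic snaps, the worst case is failing just before the
    next snapshot ships: you lose up to one full interval of writes."""
    return snapshot_interval

def data_lost(last_shipped_snapshot: datetime, failure: datetime) -> timedelta:
    """Everything written after the last shipped snapshot is gone."""
    return failure - last_shipped_snapshot

interval = timedelta(minutes=15)
last_snap = datetime(2019, 6, 1, 12, 0)
crash = datetime(2019, 6, 1, 12, 14)

print(worst_case_rpo(interval))     # 0:15:00
print(data_lost(last_snap, crash))  # 0:14:00
```

Contrast that with synchronous replication, where every write is acknowledged at both sites before completing: there is simply no window of unshipped writes to lose.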
For some workloads that doesn't matter. If you have a web server, for example, managed by a desired state configuration tool like Puppet, then the worst case scenario is that you fail over to some older version of that workload, Puppet detects that the config isn't the latest, and then sets about re-applying the most modern config. So far, so groovy.
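The convergence loop a tool like Puppet runs can be sketched in a few lines. This is a toy model, not Puppet's actual API; the state keys and values are invented for illustration:

```python
# Hedged sketch of desired-state reconciliation after failover: the
# agent compares the (stale) actual state to the desired state and
# re-applies the diff. A real agent would install packages and write
# configs; here we just merge dictionaries to show the control flow.
desired = {"nginx_version": "1.16", "vhost": "www.example.com"}

def reconcile(actual: dict, desired: dict) -> dict:
    """Detect drift from the desired state and 'apply' the fix."""
    drift = {k: v for k, v in desired.items() if actual.get(k) != v}
    actual.update(drift)  # re-apply the most modern config
    return drift

# Failover restored an older version of the workload:
stale = {"nginx_version": "1.14", "vhost": "www.example.com"}
changed = reconcile(stale, desired)
print(changed)            # {'nginx_version': '1.16'}
print(stale == desired)   # True: the node has converged
```

The key property is that the workload's entire meaningful state lives in the desired-state definition, so restoring an old copy costs nothing but a convergence run.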
But this gets a lot more miserable for databases, files and anything else you want to store. Here is where rewriting and recoding and redesigning everything becomes the thing that cloud providers want you to do.
If you can recode your application to use object storage instead of file storage then you can store images and the like in the object storage. Storage that – wait for it – replicates the objects over the network to provide resiliency. Just like a SAN or HCI does. It's just cheaper for the cloud provider to set up that storage. It's also usually dog slow.
Databases go on fast storage, but you're encouraged to set up complicated database replication schemes to ensure that data is replicated between sites. Replication that occurs over the network. It's just not using the SAN or HCI storage to do it. This pushes the cost back on to the customer, who pays for the network traffic, and now has to pay for multiple database instances and multiple storage instances. Great for the cloud providers!
So if you're running a great big globally distributed application that needs to be coded efficiently to run around the world, tearing up your old applications in order to make them cloud native makes sense. This is because no storage mechanism is going to provide an RPO of 0 across oceans. The speed of light is a problem.
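A back-of-envelope calculation shows why. Synchronous replication makes every write wait a full round trip, and the distances and fibre overhead below are rough assumptions, not measurements:

```python
# Lower bound on round-trip time for synchronous replication.
# Light in fibre travels at roughly 2/3 c; the 1.5 factor folds that
# (plus nothing else) into the vacuum figure. Real paths are longer.
C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBRE_FACTOR = 1.5        # fibre propagation penalty (assumed)

def min_rtt_ms(distance_km: float) -> float:
    one_way_s = distance_km * FIBRE_FACTOR / C_VACUUM_KM_S
    return 2 * one_way_s * 1000

print(round(min_rtt_ms(80), 2))    # metro-scale link: ~0.8 ms
print(round(min_rtt_ms(5570), 1))  # transatlantic-scale: ~55.7 ms
```

A metro link fits comfortably under the 10msec budget mentioned earlier; an ocean crossing burns five times that budget before a single switch or disk gets involved. No storage vendor can engineer around that.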
But for the overwhelming majority of workloads run by the overwhelming majority of companies, this simply isn't a real-world requirement.
Most companies are happy if most of their workloads have an RPO of 0 for daily use and an RPO of 15 minutes for disasters. They are usually okay with RTOs of "hours" in recovering from disasters, because "disasters" in this case are affecting their customers as well. Some workloads (such as your website) you want up 100% of the time, but that is cheap and easy to do using a SaaS solution, and doesn't require the whole rest of your infrastructure to be a globally distributed solution that's always available.
Many companies that need more resiliency than "within the same datacenter" are just fine with "RPO of 0 at a metro level" clustering and "RPO of 5 minutes snapped to a different geo". Slightly higher needs, but again, if the city ends up flooded out or some such, customers will generally understand a brief outage, or a 5-minute data loss.
So this leaves us with the minority of workloads and the minority of companies who absolutely need completely bullet-proof workloads that have world-wide geo-resiliency. These companies with these workloads need to, should, will and are recoding their applications to take advantage of what public cloud has to offer.
Again: what's important to note here is that the extremes don't apply universally. Perhaps more critically...there is zero incentive for most companies to move the majority of their workloads to the public cloud. Not as IaaS, not by rewriting them as SaaS.
SANs, NASes and hyperconvergence will be around for decades yet. And they'll sell in good volume. Because they're simple. Because they do the job. And because, ultimately, "cloud native" isn't the solution to all ills.