The backend database on an HPC system exists to store the model configuration, so you need ease of use and quick snapshots more than you need complex custom SQL triggers and report generation.
Established 80 years ago this year, Los Alamos National Labs remains most famous for its central role in developing the first atomic bomb. But that belies the breadth of scientific research it has undertaken since, encompassing physics, chemistry and biology, and addressing the threat of COVID-19. Despite the breadth of …
I'm not criticising here, just observing.
Step 1. "The Department of Energy needs an HPC facility to simulate nuclear explosions and such."
Step 2. Money is allocated, and the HPC is installed.
Step 3. "This HPC thing works great, but we've got a lot of unused capacity here. It would make economic sense to run other scientists' workloads, too."
Step 4. The DoE HPC becomes wildly popular; many scientists and organizations want to run their jobs on it.
Step 5. "This HPC thing is still working great, but we're uncomfortably close to maxing it out. We need a faster HPC to cope with the workloads."
Step 6. Go to step 2.
Better mousetrap, and all that.
Really, it's a "simple" (heh) demonstration of the idea that good quality and service attract customers.
Big dumb corporations try to replicate that success with the usual dirty tricks, e.g. vendor lock-in, monopoly strangleholds, anti-competitive behavior, and overt and possibly deceptive marketing. Unfortunately, the mania for short-term growth at all costs tends to reward their behavior.