What are you trying to do?
If I want to set up fast data response... that would be 3 x 2TB NVMe drives.
On a single node, that's 6TB of raw fast disk.
Then you have 5 open SATA slots. You can fill those with larger but slower HDDs, or with SATA SSDs (4TB), which are still slower than the NVMe tier. I would rather use SSDs because 5 x 4 = 20TB raw space, which is enough. You can get 8TB HDDs, which are currently the sweet spot on price per TB. Larger HDDs exist (10 or 12TB), but you pay a premium for them. The other issue is the amount of heat and energy the HDDs generate.
So when you're limited by the number of M.2 slots, you end up paying a premium for higher density. Note, I would use 2 of the SATA SSDs mirrored for the OS rather than boot from NVMe. While that may seem counterintuitive... my PCs run Linux and run 24x7x365.
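To make the capacity math above concrete, here's a quick sketch. The drive counts and sizes are the ones mentioned in this post (3 x 2TB NVMe, 5 SATA bays with 4TB SSDs or 8TB HDDs); everything is raw capacity, before filesystem overhead or redundancy.

```python
# Rough raw-capacity math for the build described above.
# Counts/sizes are the ones from the post -- adjust to taste.

def raw_tb(count, size_tb):
    """Raw (unformatted, no-redundancy) capacity in TB."""
    return count * size_tb

nvme_fast = raw_tb(3, 2)    # 3 x 2TB NVMe  -> fast tier
sata_ssd = raw_tb(5, 4)     # 5 x 4TB SATA SSDs
sata_hdd = raw_tb(5, 8)     # 5 x 8TB HDDs (the "sweet spot")

print(f"NVMe tier:     {nvme_fast} TB raw")
print(f"SATA SSD tier: {sata_ssd} TB raw")
print(f"SATA HDD tier: {sata_hdd} TB raw")
```

Keep in mind that if 2 of the 5 SATA SSDs become a mirrored OS pair, the data side of that tier drops from 20TB to 12TB raw (3 drives), with the mirror contributing another 4TB usable for the OS.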
Keep in mind... circa-2010 Hadoop clusters had 1U or 2U boxes with 4 hot-swap 3.5" SATA/SAS bays, and 2 CPUs with 8 cores / 16 threads (Xeon E5s). Drives were 2 or 4TB depending on your budget.
Now you can have a single CPU with 18 cores == 36 virtual cores, which is roughly the compute of 4-5 of those older servers.
When you consider that... it puts things in perspective.
And yes, I'm not running a PC game on these machines. ;-)