Re: No thanks
Putting to one side the questioning of my storage knowledge simply because I advocate using the native OS for storage subsystems rather than expensive appliances, I take issue with the general point you raise.
Whilst you are right that SMB support on non-Windows devices is still (amazingly) bloody awful, if you don't mind me saying so it sounds as if you misunderstand Storage Spaces Direct (S2D). S2D is the storage subsystem, not the presentation layer. You can of course deploy Scale-Out File Servers that sit on top of your S2D to present SMB, but in this context, and in the majority of deployments, you use S2D as the storage subsystem for your VMs to reside on, not as something the OS/apps inside a Linux VM connect to.
Therefore you can quite easily and happily use what MS now call "Azure Stack HCI" (S2D + Hyper-V) and run Linux VMs on top, with the VHDX files living on a ReFS S2D volume. The Linux VMs wouldn't even need Samba installed.
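To make that concrete, the whole model is a handful of PowerShell cmdlets. A hedged sketch only: the cluster name, volume name, sizes and VM name here are all made up, and I'm assuming a validated failover cluster already exists across the S2D-capable nodes.

```powershell
# Enable Storage Spaces Direct across the local drives of an existing
# failover cluster (hypothetical name "HCI-Cluster")
Enable-ClusterStorageSpacesDirect -CimSession HCI-Cluster

# Carve out a ReFS volume from the S2D pool; it surfaces as a
# Cluster Shared Volume under C:\ClusterStorage\
New-Volume -CimSession HCI-Cluster -StoragePoolFriendlyName "S2D*" `
    -FriendlyName "VMStore" -FileSystem CSVFS_ReFS -Size 2TB

# Create a Gen 2 Linux VM whose VHDX lives on that S2D-backed volume --
# no SMB, NFS or Samba involved anywhere in the guest
New-VM -Name "linux01" -Generation 2 -MemoryStartupBytes 4GB `
    -NewVHDPath "C:\ClusterStorage\VMStore\linux01.vhdx" -NewVHDSizeBytes 60GB
```

The point being that the guest just sees a virtual disk; the S2D/ReFS plumbing is entirely the hypervisor's business.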
I'd much, much rather use SMB 3.0.2 than iSCSI: it's demonstrably better performing, easier to set up and administer, and more resilient. Feel free to search for references, but if you're stuck I'll point you in the right direction.
The only time your concern would be valid, in my view, is if you were running a heterogeneous environment where your hypervisor is Linux-based and S2D is your backend storage subsystem for your VMs, which would be nuts. You might as well use the equivalent of S2D on your Linux distro of choice, either converged or hyperconverged.
Of course, let's not forget that you could use the above model (S2D storage with KVM/Xen as the hypervisor) and simply present the S2D storage via NFS, which Windows Server fully supports. That might be a goer if you have separate storage and compute teams, but generally speaking, if I'm using Windows or Linux for the hypervisor then for ease of use I'd likely use the same OS type for the storage layer too.
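For completeness, the NFS presentation is two more cmdlets on the Windows side. Again a rough sketch under assumptions: the share name and path are invented, and I'm presuming the S2D volume is already mounted under C:\ClusterStorage\.

```powershell
# Install the NFS server role on the Windows storage node
Install-WindowsFeature FS-NFS-Service -IncludeManagementTools

# Export the S2D-backed path over NFS for the Linux hypervisors;
# unmapped access avoids needing Windows-to-UNIX identity mapping
New-NfsShare -Name "vmstore" -Path "C:\ClusterStorage\VMStore" `
    -EnableUnmappedAccess $true -Authentication Sys
```

The KVM/Xen hosts then mount that export like any other NFS datastore.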