>>Kind of odd IBM didn't use their own FlashSystem boxes for this config
Not at all, if you lift the covers. The FlashSystem is designed for high sustained write rates without ever hitting a write cliff, which suits workloads such as the log output of in-memory databases or the metadata store in Spectrum Scale. For that reason, the chip technology used in IBM FlashSystems is higher grade and less dense. It is really difficult to overwhelm a FlashSystem, whereas overwhelming ordinary SSDs (and getting bad response times) is easy.
*Many* larger Spectrum Scale clusters worldwide use FlashSystems as metadata storage devices, with bulk data still going to large HDD JBODs. This is now changing: the price point of a DeepFlash 150 allows swapping out all disks for solid-state storage.
It is still a good idea to place the metadata stream on a FlashSystem: a FlashSystem does 2D-RAID in ultrafast gate logic, while a DeepFlash 150 relies on Spectrum Scale's native erasure coding software to provide device protection. For bulk data that process can be parallelized, but not for metadata operations.
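To make the split concrete, here is a sketch of how such a layout is typically expressed in Spectrum Scale NSD stanzas: metadata-only NSDs backed by the FlashSystem go into the system pool, while DeepFlash-backed NSDs carry data only. All device paths, NSD names, and server names below are illustrative placeholders, not taken from any real configuration.

```shell
# nsd.stanza -- hypothetical example; adjust devices/servers to your site.
# FlashSystem LUNs: metadata only, in the system pool.
%nsd: nsd=fs900_md_01 device=/dev/dm-0 servers=nsdsrv01,nsdsrv02 usage=metadataOnly pool=system
%nsd: nsd=fs900_md_02 device=/dev/dm-1 servers=nsdsrv02,nsdsrv01 usage=metadataOnly pool=system
# DeepFlash 150 LUNs: bulk data only, in a separate data pool.
%nsd: nsd=df150_data_01 device=/dev/dm-10 servers=nsdsrv01,nsdsrv02 usage=dataOnly pool=data
%nsd: nsd=df150_data_02 device=/dev/dm-11 servers=nsdsrv02,nsdsrv01 usage=dataOnly pool=data

# Create the NSDs, then the file system, from the same stanza file:
#   mmcrnsd -F nsd.stanza
#   mmcrfs gpfs1 -F nsd.stanza -A yes
```

With metadata confined to the system pool on FlashSystem NSDs, the latency-sensitive metadata stream never touches the erasure-coded bulk tier.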
If you decide to put both metadata and bulk files on the same DeepFlash device, it becomes an "entry" configuration, in light of the above. That is not recommended for an ultrascalable filer with heavy metadata updates, but it is ideal if most operations are read-only with no locking: think of media servers, Hadoop analytics clusters, and the like. An entry configuration will also happily support a collection of everyday VMs at decent speed.
____________
PS: A stylish GUI (in the latest XIV design) comes with Spectrum Scale 4.2. No more command-line fiddling after initial setup: for screenshots, see http://www.spectrumscale.org/meet-the-devs-oxford-2016/
____________
Disclaimer: I work for IBM. But you'll have guessed that.