Dev Drive wouldn't be needed if M$ would optimize its NTFS usage.
Two things are proven to speed things up a lot, even on an SSD. First: get the $MFT defragmented into one chunk, and reorder its entries so that the files of one directory are grouped together. Second: defragment the directory storage, i.e. the clusters which hold the directory structure, and group those together as well.
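You can already see how scattered the $MFT is with the built-in fsutil; Sysinternals Contig can also analyze and, as far as I remember, defragment NTFS metadata files like the $Mft. A quick sketch, assuming drive D: and an elevated prompt (the Contig syntax is from memory, treat it as an assumption and check its usage output first):

    :: show $MFT size, start LCN and MFT zone (look at the Mft* lines)
    fsutil fsinfo ntfsinfo D:

    :: Sysinternals Contig (not built in): analyze, then defragment
    :: the $MFT itself (syntax assumed)
    contig -a D:\$Mft
    contig D:\$Mft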
What I have to do regularly (every two to three years) for a quite active drive:
1. Back up the whole thing (robocopy is enough).
2. Format the thing fresh.
3. robocopy back with /CREATE, which copies only the directory tree and ZERO-LENGTH files, so the $MFT entries and the directory storage get created in one place.
(3.1. Do that again somewhere else on the target drive as a dummy, to force the $MFT to grow early on and in one piece; see fsutil.exe fsinfo ntfsinfo d:, all lines starting with Mft.)
4. robocopy the real data back, same as in 3 but leave out /CREATE. (The full command sequence is sketched after this list.)
(4.1. Don't forget to nuke the dummy created in 3.1.)
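The whole cycle as a cmd sketch. The drive letters, the E:\backup target and the D:\dummy path are placeholders of mine, adapt to taste:

    :: 1. back up everything (subdirs incl. empty ones, all file info)
    robocopy D:\ E:\backup /E /COPYALL

    :: 2. fresh format (will ask for confirmation and a volume label)
    format D: /FS:NTFS /Q

    :: 3. recreate the tree as zero-length files, so all $MFT entries
    ::    and directory clusters are allocated up front, in one place
    robocopy E:\backup D:\ /E /CREATE

    :: 3.1 dummy copy to force the $MFT to grow early and in one piece
    ::     (watch the Mft* lines of: fsutil fsinfo ntfsinfo D:)
    robocopy E:\backup D:\dummy /E /CREATE

    :: 4. copy the real data back over the zero-length stubs
    robocopy E:\backup D:\ /E /COPYALL

    :: 4.1 nuke the dummy tree again
    rd /S /Q D:\dummy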
Result: same data on the SSD, but hell, why is access suddenly so fast? Why does BitLocker unlock take only a fraction of a second instead of four? Simple! See my complaint in the first line of this comment about what M$ should improve with their defrag tool. Then we would not need a pseudo-ReFS marketing push that reminds you on and on how mature ReFS actually is.