But designed to lose your data!
BEWARE THE DELIBERATE DATA LOSS
I've been a Parallels user for a looooong time, since it first came out in fact. But recently I found that it has a rather bizarre design decision built into it. If a bad block never affects one of your virtual disk images, you'll never see it - I hadn't until recently. When there's a read error on the host that affects the virtual disk, it isn't passed through as a read error in the guest (which would let you easily copy any unaffected files off the virtual disk). Instead, they thought it made sense to offer two options: retry (yeah, like that's going to help), or nuke the guest.
Tech support were ... useless. I had to explain (and demonstrate) to each new technician that no, simply copying the virtual disk to another disk doesn't work. I also had to explain that failing to pass a read error up the chain is not an inherent restriction of virtualisation. And I forget what other silliness I had to put up with.
So instead of being able to recover any non-affected files off the guest using something like Carbon Copy Cloner, you either need to be exceedingly patient (rebooting the guest each time you hit a bad block) or just kiss your data goodbye.
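For what it's worth, the behaviour I wanted is neither exotic nor hard: read the image block by block, and where a block is unreadable, note it and carry on instead of aborting, so everything else survives. That's roughly what recovery tools like GNU ddrescue do. Here's a hypothetical sketch of the idea - this is illustrative, not Parallels' code, and `read_block` is a stand-in for whatever reads the underlying image:

```python
BLOCK_SIZE = 4096  # illustrative block size

def salvage_copy(read_block, total_blocks):
    """Copy an image block by block, surviving read errors.

    read_block(n) returns BLOCK_SIZE bytes, or raises IOError for a
    bad block. Bad blocks are zero-filled and their positions recorded,
    so all the unaffected data still comes out the other side.
    """
    salvaged = bytearray()
    bad_blocks = []
    for n in range(total_blocks):
        try:
            salvaged += read_block(n)
        except IOError:
            # Don't abort: pad with zeros and remember where the hole is.
            salvaged += b"\x00" * BLOCK_SIZE
            bad_blocks.append(n)
    return bytes(salvaged), bad_blocks
```

Hand the guest (or your cloning tool) the salvaged copy plus the list of holes, and you've lost one block's worth of data instead of the whole disk.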
And a special thank you to Apple for obfuscating the SMART data to the point where it tells you the disk (SSD) is absolutely fine (SMART data verified) right up until the machine hangs because the SSD has gone into read-only mode, having used up its spare block allocation.
So what is it with modern day software houses? Doesn't anyone give a s**t about the user experience any more, or is it a case of "look at the ooh shiny, never mind if it f***s you over at the slightest fault"?