Not server++
Maybe I'm optimistic, but the posters complaining that this won't run their web service any faster (if at all, given its new OS) have missed the point.
In the data-crunching corner of the world, most of the innovation is around describing 'non-traditional' processing tasks and then mapping them (very painfully) onto traditional hardware. Everyone will tell you that adding another off-the-shelf node to a compute cluster is cheap, and that you can keep expanding until the cluster handles the large loads that big data, inference and graph compute problems throw up. The problem is that big clusters do not scale linearly when it comes to reliability, and network and disk effects mean that at least half of a cluster's energy goes into overcoming the dead weight of compute resources that don't match the task. We don't actually want a vaster, more manageable cluster of Linux boxes; we want a compute resource matched to the process description.
Soooo.. as Mars shots go, this could make some sense. We're starting to describe processing in terms of directed graphs of actions, which can be mapped onto both batch and real-time workloads. An architecture built from the start on the premise of many actors consuming a vast store of messages in a robust, scalable way could outperform today's clusters by orders of magnitude. Given the cost of provisioning and maintaining a modern cluster, the exotic nature of the Machine may be a small price to pay.
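To make the "directed graph of actions" idea concrete, here's a toy sketch (class and method names are my own invention, nothing to do with the Machine's actual programming model): actions form a graph, each action consumes its upstream's output stream, and the same graph description can be replayed over a batch or fed live messages.

```python
from collections import deque

class Dataflow:
    """Toy directed graph of actions over a message stream."""
    def __init__(self):
        self.fn = {}        # action name -> transform function
        self.parent = {}    # action name -> upstream action (None = source)
        self.children = {}  # action name -> downstream action names

    def action(self, name, fn, after=None):
        self.fn[name] = fn
        self.parent[name] = after
        self.children.setdefault(name, [])
        if after is not None:
            self.children.setdefault(after, []).append(name)
        return self

    def run(self, messages):
        # Walk the graph from the sources, feeding each action the
        # output of its parent; return the output of every sink.
        out = {}
        queue = deque(n for n, p in self.parent.items() if p is None)
        while queue:
            n = queue.popleft()
            src = messages if self.parent[n] is None else out[self.parent[n]]
            out[n] = [self.fn[n](m) for m in src]
            queue.extend(self.children[n])
        return {n: s for n, s in out.items() if not self.children[n]}

flow = (Dataflow()
        .action("clean", str.strip)
        .action("shout", str.upper, after="clean")
        .action("count", len, after="clean"))
print(flow.run(["  big ", " data "]))
# {'shout': ['BIG', 'DATA'], 'count': [3, 4]}
```

The point of writing it this way is that the graph is pure description: nothing in it says whether the message store is a batch on disk or a live feed, or whether the actions run on one box or thousands, which is exactly the gap a message-centric architecture would exploit.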
I'm reminded of the early NUMA machines, which suddenly let tasks that used to need a building full of mainframes run in a box under your desk. This architecture could do the same to clusters - and not by virtualising thousands of machines into one box.