Virtualization moves to centre of mobile agenda

Six months after VMware announced plans for a mobile version of its virtualization platform – used to manage data center resources and PCs more efficiently – it says the technology will be seen in handsets next year, as it seeks to see off competition from mobile specialists such as VirtualLogix. Virtualization would enable …

COMMENTS

This topic is closed for new posts.
  1. Bernie

    Sounds like the solution to a problem

    nobody has.

    Wouldn't the time be better invested in making the phone's OS better? I fail to see why any of the stuff mentioned in this article couldn't be done just as well without virtualization.

    And storing a complete image of everything on your phone in the cloud? Google must be salivating.

  2. Richard Kay
    Boffin

    For specialised processing requirements maybe.

    "The results would be returned across the internet to the phone, speeding up tasks like graphics processing and supporting high end video or gaming. Intel even says CloneCloud would be able to decide dynamically whether a task would be better processed by the device itself or in the cloud, depending on its processing burden and the quality of the network connection."

    Currently only somewhat specialised workloads benefit from this approach: those where the latency requirement is loose and the input/output of the job is small, but its compute and memory demand is high. I do this today for spam content analysis, where the MTA on a virtual server in a datacentre, which accepts or rejects each email, offloads the content analysis to a faster CPU at home that has more memory but a slower network connection.

    But I don't see either network bandwidth or latency improving quickly enough any time soon for this to work for graphics processing or high-end video gaming, where a lot of CPU and memory has to sit very close to the display right in front of the end user's eyeballs. Getting a faster computed response in a deep strategy game such as Go, perhaps using a Monte Carlo simulated-annealing approach, might work, assuming a very large parallel supercomputer is available over the network link rather than in front of the player. But again, that is a fairly specialised requirement.

    Automating this would require an extra layer around an application's exec() call which collects statistics on its input/output volume and CPU/memory load. The administrator of the system in question would need to enable that check only for selected candidate jobs; otherwise the overhead of testing every job for suitability would outweigh the benefit for the few jobs that actually gain from offloading.
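
    As a rough sketch of the kind of check such a layer would have to make (a toy illustration only: the JobStats/Link names, the cost model and the numbers are my own assumptions, not anything CloneCloud or VMware have published, and it presumes the job's cost can be estimated up front):

        from dataclasses import dataclass

        @dataclass
        class JobStats:
            cpu_seconds: float     # estimated compute time if run on the handset
            io_bytes: int          # input plus output that would cross the network
            remote_speedup: float  # how much faster the remote CPU is, e.g. 8x

        @dataclass
        class Link:
            bandwidth_bps: float   # measured throughput of the connection
            rtt_s: float           # round-trip latency in seconds

        def offload(job: JobStats, link: Link) -> bool:
            """Offload only if shipping the data plus remote compute beats local compute."""
            transfer_s = job.io_bytes * 8 / link.bandwidth_bps + link.rtt_s
            remote_s = transfer_s + job.cpu_seconds / job.remote_speedup
            return remote_s < job.cpu_seconds

        # A compute-heavy, I/O-light job offloads; a data-heavy one stays local.
        link = Link(bandwidth_bps=2_000_000, rtt_s=0.15)
        heavy = JobStats(cpu_seconds=30.0, io_bytes=200_000, remote_speedup=8.0)
        chatty = JobStats(cpu_seconds=2.0, io_bytes=50_000_000, remote_speedup=8.0)
        print(offload(heavy, link))   # True  - worth sending over the network
        print(offload(chatty, link))  # False - the transfer alone takes ~200 s

    In practice the hard part is getting trustworthy estimates of cpu_seconds and io_bytes before the job runs, which is exactly the statistics-gathering overhead described above.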

  3. Anonymous Coward
    Anonymous Coward

    Say what?

    Is this 100% pure BS or what?

    Put the phone to one side for a moment and think of the PC equivalent. A Windows 98 "Personal Information Manager" in a VM does *not* instantly share info with a Windows 7 "Personal Information Manager" in a different VM, even though they are both running "at the same time" on the same hardware. They need common file formats and synchronized access to shared storage; otherwise, why bother?
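
    To make that concrete, the bare minimum the two sides would have to agree on looks something like this (a toy sketch, not anyone's actual sync mechanism; the shared path, the JSON layout and the POSIX flock() locking are invented for illustration):

        import fcntl
        import json
        from pathlib import Path

        SHARED = Path("/shared/contacts.json")  # storage both guests can actually see

        def add_contact(name: str, number: str) -> None:
            """Append a contact under an exclusive lock so concurrent writers don't clobber each other."""
            SHARED.touch(exist_ok=True)
            with open(SHARED, "r+") as f:
                fcntl.flock(f, fcntl.LOCK_EX)      # synchronized access to the shared storage
                raw = f.read()
                contacts = json.loads(raw) if raw else []
                contacts.append({"name": name, "number": number})
                f.seek(0)
                f.truncate()
                json.dump(contacts, f)             # an agreed, common on-disk format
                fcntl.flock(f, fcntl.LOCK_UN)

        # Either "PIM", in either VM, could then call this against the same file.
        add_contact("Alice", "555-0100")

    Without that agreed format and the locking, running the two of them "at the same time" buys you nothing.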

    Or is there something magick going on here which I have failed to understand?

