Dell exec: HP's 'Machine OS' is a 'laughable' idea

Dell thinks HP's bet-the-company gamble on an entirely new open source operating system, built around systems based on memristors and photonics, is "laughable". When asked at a meeting in San Francisco on Thursday about their thoughts on HP's just-announced, memristor-based "Machine OS", Dell software chief John Swainson and …


  1. Captain Server Pants

    Truth hurts

    That these articles are being written without any mention of HP-UX is proof positive that HP is hopelessly out to lunch. Why would a company with an existing, somewhat modern version of Unix talk about building high-memory-bandwidth systems and ignore its own kit? Who do they intend to sell it to, because it can't be to their installed base of HP-UX users? If HP is serious about a new enterprise OS then they need to figure out how to set an example with HP-UX. So far I would give HP an F- in OS ownership and roadmap. Truth hurts.

    1. Denarius

      Re: Truth hurts

      @Capn Server Pants, probably right. It appears the company has given up on HP-UX in the C-suite and is milking the corpse for as long as it can. There is no obvious support for the gallant crew trying to support customers' systems, just more staff cuts and demands for longer hours.

    2. Anonymous Coward

      Re: Truth hurts

      One might even say that HP is a place where OSes go to die:

      - MPE

      - MPE/XL

      - tru64 unix

      - Domain/OS

      - SPP-UX

      - openVMS

      - HP-UX

      What are the odds on Machine OS surviving?

      1. Bronek Kozicki

        Re: Truth hurts

        IF there is a new open source OS then the odds of it surviving are pretty high. None of the above are open source, but look at Haiku (I didn't say "... and prosper").

      2. swschrad

        wouldn't you like to replace all that legacy with one OS, too?

        not to mention, The Machine is the perfect way to kick underperforming, overcosting Itanic out the window into a dumpster conveniently parked underneath.

      3. Mpeler

        Re: Truth hurts (HP where OS's go to die)

        Don't forget RTE (RealTimeExecutive) on the HP1000, Non-Stop (the TANDEM OS, might have been killed by DEC on the way to Compaq/HP), and whatever the OS was that was used by the Apollo machines before HP bought them...there was also the short-lived HP300, and HP250/260 with their nice (albeit almost unknown) OS's, and some instrumentation ones as well (HP2000(?), 2100, etc.).

        I think HP probably want to build an OS that's not part of the *UX universe. If nothing else, it would be good to get some variety out there (as said elsewhere about the innumerable descendants of MULTICS floating about....).

        Would also be a kick to see a non-traditional (aka von Neumann) architecture make it for real....the trick is getting it to actually run and be accepted....

        As someone noted, HP seemed to stop inventing about the same time that Carly stuck it onto the bottom of the emblem/sign/trademark/etc., so it would be good to see them REALLY doing it again...

  2. Anonymous Coward

    One company laughs at rival company's idea shocker

  3. Anonymous Coward

    Hilarious coming from a company that has NEVER done any R&D

    Dell just repackages Intel's reference designs and throws on MS software plus whatever crapware that pays.

    HP may not be the major R&D company they were in the past, but even this husk of the former HP, even ignoring the big moonshot effort, does more R&D in a day than Dell has done in its entire history.

    1. GitMeMyShootinIrons

      Re: Hilarious coming from a company that has NEVER done any R&D

      Strikes me that the opinion of a company that doesn't do any competing development would be a better choice than asking, for example, IBM.

      Especially given how Dell have developed a pretty successful footprint in the commercial tech market, giving folk what they want rather than pie-in-the-sky.

      Is their opinion any less valid than yours? Like most, you have your own bias, and it is quite clear in your case that you see red when Dell or Microsoft are mentioned. Your choice, of course, but a tad prejudiced.

      1. Preston Munchensonton

        Re: Hilarious coming from a company that has NEVER done any R&D

        The hyperbole involved may indicate a bias, but the facts involved serve the basic point. Dell isn't focused on R&D and never has been, which does leave them free to act as a systems integrator without the research overhead or the innovation benefits. HP had effectively abandoned its long history of R&D since the Compaq merger, but appears finally to have figured out what virtually everyone has been saying since that time: that HP doesn't innovate anymore, and that the loss of identity really hurt its operations.

        Maybe that indicates bias to some. Personally, I think the facts speak for themselves. Dell is quite good at what it does, but HP needs this type of R&D to right the ship (even if I remain skeptical about "The Machine").

        Alternatively, is this announcement the point we will call the RoTM?!?!? Icon, obviously.

    2. Anonymous Coward

      Re: Hilarious coming from a company that has NEVER done any R&D

      Mmmm, perhaps that is so.... but you can still customize and order a server a lot faster from Dell than you can from HP.

  4. W. Anderson

    Expected response from Microsoft minions Dell and The Register.

    It is not surprising that the article author noted "El Reg is of the same opinion as Dell", basically meaning that like Dell, the Register is quite skeptical of success in any Operating System (OS) Software other than Microsoft Windows - to which both entities are wedded at the neck - one by proxy and the other by marketing income.

    Similar skepticism was also expressed about the newly introduced Apple iOS and Android mobile OSes and their ability to succeed. Look at the mobile OS landscape now!

    1. Denarius

      Re: Expected response from Microsoft minions Dell and The Register.

      @W. A bit unfair. ElReg has to make a buck. Aside from that, mocking M$ is not fun anymore. They are too predictable in stuffups and just another IT company, seemingly specialising in making their applications harder to use. Maybe they should hire some Gnome developers to show them how to really antagonise their users. <duck>

    2. JackClark

      Re: Expected response from Microsoft minions Dell and The Register.

      Actually, my opinion is that it's confusing HP is doing a completely new open source OS rather than trying to do this within the Linux community.

      1. Bronek Kozicki

        Re: Expected response from Microsoft minions Dell and The Register.

        Absolutely agree. One just cannot reconcile the aggressive timeline with the effort required to write a new OS. It does not compute, and leaves everyone pretty confused.

        Unless the idea is to make a new OS under the GPL and borrow heavily from Linux (or perhaps the more liberal BSD, as Apple did). That would be very interesting and might just make the deadlines (only slightly pushed back, by just a few years).

  5. Uncle Ron


    Swainson is showing his "IBM'ness." IBM has always been a "fast follower" (sometimes not so fast) and was never the first out of the gate on any really new tech. IBM may have "invented" all of it, but they let others test the marketplace first. Dell has never really done any R&D so Swainson fits right in there, since Dell has no capability to be in this game.

    HP is probably right that to fully exploit Storage Class Memory (which is what memristor is) would require a new architecture, and a new OS. This new HW/OS combination would completely bypass the traditional I/O subsystem--which is a -huge- bottleneck. The OS would have to be aware of it, and probably (but not necessarily) apps would have to be aware.

    However, IBM is also aware of this. In a big way. IBM will certainly be a "fast follower" if HP digs up some paydirt. Of all the "big boys" out there, HP is possibly the most likely to be first out of the gate. But they won't be alone for long...

    1. Raumkraut

      Re: IBM'ness

      Exactly this.

      AIUI these nay-saying companies seem to be saying "we'll just stuff our existing software products on top of these new kinds of hardware", without realising that introducing something like memristors could transform the computer architecture in a way which defies the fundamental assumptions on which current-day operating systems are built.

      If HP do their Machine OS properly (rather than proprietarily), then it shouldn't matter so much whether it's memristors, battery-backed-up conventional RAM, crystal-lattice quantum dots, inverse-tachyon ion fields, or anything else that "wins" the hardware battle, so long as the basic assumptions of the OS hold true.

  6. Denarius

    all of the above

    HP had a great OS in HP-UX, wedded to a great processor that was 10 years too late. Cheap Intel i7s can do everything Merced, aka Itanic, was going to do except the big-endian/little-endian indifference. HP refused to move HP-UX to Intel x86 CPUs, so they have a legacy Unix business. Both have a similar hardware business. Both are wedded to an OS and application environment that is static. Static means that, with other OSes and applications growing, it proportionally shrinks to a more modest part of the IT ecology. Don't worry Trevor, M$ will be around, barring disaster, for a long while yet, unless they do another Vista or Windows 8.0.

    HP's stated goal seems to be a throwback to the golden 1980s of the proprietary vendors. It does not matter if the OS is open source if only one mob has the hardware. Will buyers accept that now? Doubt it. The critical issue is whether HP's proposed hardware can be built to an affordable budget and work as required. Given HP's recent habit of sudden-start campaigns followed by catastrophic stops, their announced goal needs to be taken with a shovel of salt. Dell may well survive, as they don't try to be leaders, but sell what sells. A satisfactory business goal. The other relics of the golden server age (IBM, Oracle) seem to have different aims. IMHO, Oracle wants tomorrow to be like today, only more so, while IBM seems to be becoming an NPE, making money from a huge patent-licensing pool. All of which makes ARM, AMD and Intel the firms to watch, right up to the time the Chinese reduce them to a relic rump when their own hardware is good enough. Doubt it will happen, but India or Brazil may also surprise the IT community.

    1. Anonymous Coward

      Re: all of the above

      If the new machines perform significantly faster than current ones people will buy them in droves, even if it is closed hardware and even if it was only available on a proprietary OS. The open source part is probably so they don't have to do all the heavy lifting themselves as much as anything, as interested parties like Linus Torvalds would probably do a lot of the porting work themselves for the price of some free hardware just because they'd find it such an interesting problem.

      The rule of thumb for getting people to switch horses hardware-wise in the mini and workstation days was a sustained 2x performance advantage. That's been impossible for anyone to obtain, let alone sustain, for the past 15 years, which is why everything has slid over towards x86 as the cheapest alternative (as it offered better and better RAS features, and virtualization made RAS less and less relevant for many application types).

      This radical new architecture could easily manage an improvement well in excess of 2x (maybe way in excess of) IF they can figure out how to build it. I think if they manage to build it, they'll have no problem selling as many as they can make, and no one will care that HP is the only source for the hardware.

    2. ecofeco

      Re: all of the above

      "Given HPs recent habit of sudden start campaigns followed by catastrophic stops"

      Recent? This has been ongoing for the last 10 years. Remember the iPAQs? That phone series was the first to have a camera built INTO the screen for video phone use (yes, 2 cameras). It was the first to run virtual reality games! It was the first to run Windows Mobile! (OK, maybe not something to brag about there.) There were so many firsts with that phone series it wasn't even funny. Remember, this phone was the ONLY real competitor to Blackberry in those days.


      That is just ONE of the MANY products and services HP created and then dropped. You would think they were taking lessons from Microsoft.

  7. amanfromMars 1

    HP ... for an exotic blend of virtual spices commanding controlling cyber vices and services

    El Reg is of the same opinion as Dell: for HP's "Machine OS" to mean anything, it has to fully interoperate with legacy software,....

    Build a more sophisticated advanced Machine OS and/or captivating Virtual Machine OS and legacy software will have to fully interoperate with HP's new systems of operation, for the likes of a Dell or a Microsoft to survive and prosper in a novel working environment.

    And is El Reg of the same opinion as Dell, or is it just Jack Clark and few other mates in the office, for although it makes no difference to the likes of a Dell or MS or an HP or to anything really in the great schema of thing or to the Internet of Things, there is a difference.

    Clearly an alien view is that it is neither safe nor wise to bet against a new system of HyPer Machine Operation. And it is Friday 13th.

  8. Robert E A Harvey

    All of that.

    I think that Dell rubbishing the idea of innovation is of a piece with Boeing giving up on innovation and all the other death-of-the-West retrenchment in businesses run by accountants and engineers. We desperately need innovation and R&D.

    But I too have my doubts that HP will make a success of this. Their track record in recent decades is not encouraging.

  9. ingeniering

    well HP !!!

    Dell has copied other companies' strategies for years (it has no ideas of its own), so it is no surprise that it doesn't "understand" HP's strategy. Hewlett Packard has some 400 scientists working on the memristor-photonics project, which promises a great advance in computer science.


  11. tempemeaty

    Should Dell's software chief and head of research point fingers?

    What is DELL doing to create an alternative OS?

    1. Anonymous Coward

      Re: What is DELL doing to create an alternative OS?

      Waiting for the word from Redmond, same as they've always done.

  12. Dan 55

    This man knows what he's talking about

    "I have been through at least three operating- system changes," he recalled, "all of which promised far more than they actually delivered, and all three of which are actually still in existence."

    That's Windows XP, Windows Vista and Windows 7. Never let it be said that Dell doesn't do any R&D.

  13. Anonymous Blowhard

    "Dell will bide its time and develop software when a clear winner emerges."

    More like "Dell will bide its time and develop software when hell freezes over"

    For those wondering why Linux wouldn't be the O/S of choice for the new architecture, you have to look at what HP is doing. Up to now, computers have had their bulk storage and working storage on separate media, and a lot of O/S functionality is there to manage that; HP's intention is to create computers that have just a single, persistent storage system.

    Sometimes you have to stick your neck out to get ahead; and HP seems to be doing that. The rest of the computer industry could be looking at a potential game-changer here, don't expect Queensberry Rules.

    1. Frank Rysanek

      Re: a game-changer

      That's a good idea for handhelds. Chuck RAM and Flash, just use a single memory technology = lower pin count, less complexity, no removable "disk drives". Instant on, always on. A reboot only ever happens if a software bug prevents the OS from going on. Chuck the block layer? Pretty much an expected evolutionary step after Facebook, MS Metro, app stores and the cloud...

    2. noominy.noom

      @Anonymous Blowhard

      The "single, persistent, storage system" sounds like the IBM i. SLS or Single Level Store. The apps don't have to think about where the data is. Memory or disk are not differentiated. It is just storage.
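
      (A toy way to see what single-level store means in practice: the sketch below fakes persistent memory with an mmap'd file - the file name and sizes are invented for illustration - so the application just addresses bytes and never issues an explicit read() or write().)

```python
import mmap
import os

# Toy sketch of the single-level-store idea: the program just addresses
# memory, and persistence is a property of the backing medium rather than
# a separate save-to-disk step. The file here merely stands in for
# persistent RAM; the name and size are made up.
PATH = "sls_demo.bin"
SIZE = 4096

with open(PATH, "wb") as f:
    f.write(b"\x00" * SIZE)            # carve out the "persistent memory"

with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), SIZE)  # storage addressed like memory
    mem[0:5] = b"hello"                # an ordinary memory write, no write()
    mem.flush()
    mem.close()

# A later run (or another process) sees the same bytes: nothing was
# "loaded" or "saved" from the application's point of view.
with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), SIZE)
    recovered = bytes(mem[0:5])
    mem.close()

os.remove(PATH)
```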

  14. Frank Rysanek

    Ahem. scuse me thinking aloud for a bit

    The memristor stuff is essentially a memory technology. Allegedly something like persistent RAM - not sure if memristors really are as fast as today's DRAM or SRAM.

    The photonics part is likely to relate to chip-to-chip interconnects. Not likely all-optical CPU's.

    What does all of this boil down to?

    The Machine is unlikely to be a whole new architecture, not something massively parallel or what. I would expect a NUMA with memristors for RAM. Did the article author mention DIMMs? The most straightforward way would be to take an x86 server (number of sockets subject to debate), run QPI/HT over fibers, and plug in memristors instead of DRAM. Or use Itanium (or ARM or Power) - the principle doesn't change much.

    Is there anything else to invent? Any "massively parallel" tangent is possible, but is not new - take a look at the GPGPU tech we have today. Or the slightly different approach that Intel has taken with the Larrabee+. Are there any gains to be had in inventing a whole new CPU architecture? Not likely, certainly not unless you plan to depart from the general von-Neumannean NUMA. GPGPU's are already as odd and parallel as it gets, while still fitting the bill for some general-purpose use. Anything that would be more "odd and parallel" would be in the territory of very special-purpose gear, or ANN's.

    So... while we stick to a NUMA with "von Neumann style" CPU cores clustered in the NUMA nodes, is it really necessary to invent a whole new OS? Not likely. Linux and many other OS'es can run on a number of CPU instruction sets, and are relatively easy to port to a new architecture. Theoretically it would be possible to design a whole new CPU (instruction set) - but does the prospect sound fruitful? Well not to me :-) We already have instruction sets, and CPU and SoC flavours within a particular family, and complete platforms around the CPU's, suited for pretty much any purpose that the "von Neumann" style computer can be used for, from tiny embedded things to highly parallel datacenter / cloud hardware.

    You know what Linux can run on. A NUMA with some DRAM and some disks (spinning rust or SSD's). Linux can work with suspend+resume. Suppose you have lots of RAM. Would it be any bottleneck that your system is also capable of block IO? Not likely :-) You'd just have more RAM to allocate to your processes and tasks. If your process can stay in RAM all the time, block IO becomes irrelevant, does not slow you down in any way. Your OS still has to allocate the RAM to individual processes, so it does have to use memory paging in some form.

    You could consider modifying the paging part of the MM subsystem to use coarser allocation granularity. Modifications like this have been under way all the time - huge pages implemented, debates about what would be the right size of the basic page (or minimum allocation) compared to the typical IO block size, possible efforts to decouple the page size from the optimum block IO transaction size and alignment... Effectively to optimize Linux for an all-in-memory OS, the developers managing the kernel core and MM in particular would possibly be allowed to chuck some legacy junk, and they'd probably be happy to do that :-) if it wasn't for the fact that Linux tries to run on everything and be legacy compatible with 20 years old hardware. But again, block IO is not a bottleneck if not in the "active execution path".
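
    (To put a number on that granularity trade-off, a throwaway sketch - the request sizes are invented, and this is not how the Linux MM accounts for memory in detail:)

```python
# Round each allocation request up to whole pages and count the slack.
# Coarser granularity means fewer pages to track and map, at the cost of
# more internal fragmentation per object - the trade-off behind the
# page-size-vs-IO-block-size debates mentioned above.
def wasted_bytes(requests, page_size):
    waste = 0
    for n in requests:
        pages = -(-n // page_size)          # ceiling division
        waste += pages * page_size - n      # internal fragmentation
    return waste

reqs = [100, 5_000, 70_000]                 # invented request sizes
small_pages = wasted_bytes(reqs, 4096)      # classic 4 KiB pages
huge_pages = wasted_bytes(reqs, 2 * 1024 * 1024)  # 2 MiB huge pages
```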

    It doesn't seem likely that the arrival of persistent RAM would remove the need for a file system. That would be a very far-fetched conclusion :-D Perhaps the GUI's of modern desktop and handheld OS'es seem to be gravitating in that direction, but anyone handling data for a living would have a hard time imagining his life without some kind of files and folders abstraction (call them system-global objects if you will). This just isn't gonna happen.

    Realistically I would expect the following scenario:

    as a first step, ReRAM DIMM's would become available someday down the road, compatible with the DDR RAM interface. If ReRAM was actually slower than DRAM, x86 machines would get a BIOS update, able to distinguish between classic RAM DIMM's and ReRAM (based on SPD EEPROM contents on the DIMMs) and act accordingly.

    There would be no point in running directly from ReRAM if it was slow, and OS'es (and applications) would likely reflect that = use the ReRAM as "slower storage". This is something that a memory management and paging layer in any modern OS can take care of with fairly minor modification.

    If ReRAM was really as fast as DRAM, there would probably be no point in such an optimization.
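
    (The "treat slow ReRAM as a second tier" idea in that scenario boils down to placement policy. A deliberately naive sketch - class name, tier sizes and the hot/cold flag are all invented:)

```python
# Toy two-tier placement: hot allocations prefer the scarce fast tier
# (DRAM), everything else spills to the big persistent tier ("ReRAM").
# Real memory management is vastly more subtle; this only shows the shape
# of the policy a paging/MM layer would implement.
class TwoTier:
    def __init__(self, dram_bytes, reram_bytes):
        self.free = {"dram": dram_bytes, "reram": reram_bytes}

    def alloc(self, size, hot=False):
        # Prefer DRAM for hot data, the slow big tier for cold data,
        # and fall back to whichever tier still has room.
        order = ["dram", "reram"] if hot else ["reram", "dram"]
        for tier in order:
            if self.free[tier] >= size:
                self.free[tier] -= size
                return tier
        raise MemoryError(size)

mem = TwoTier(dram_bytes=4, reram_bytes=64)
assert mem.alloc(3, hot=True) == "dram"    # hot working set lands in DRAM
assert mem.alloc(3, hot=True) == "reram"   # DRAM full, spills over
assert mem.alloc(10) == "reram"            # cold data goes straight to ReRAM
```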

    Further down the road, I'd expect some deeper hardware platform optimizations. Maybe if ReRAM was huge but a tad slower than DRAM, I would expect another level of cache, or an expansion in the volumes of hardware SRAM cache currently seen in CPU's. Plus some shuffling in bus widths, connectors, memory module capacities and the like.

    So it really looks like subject to gradual evolution. If memristors really turn out to be the next big thing in memory technology, we're likely to see a flurry of small gradual innovations to the current computer platforms, spread across a decade maybe, delivered by a myriad companies from incrementally innovating behemoths to tiny overhyped startups, rather than one huge leap forward delivered with a bang by HP after a decade of secretive R&D. The market will take care of that. If HP persists with its effort, it might find itself swamped by history happening outside of their fortress.

    BTW, the Itanium architecture allegedly does have a significant edge in some very narrow and specific uses, from the category of scientific number-crunching (owing to its optimized instruction set) - reportedly, with correct timing / painstakingly hand-crafted ASM code, Itanium can achieve performance an order of magnitude faster than what's ever possible on an x86 (using the same approach). This information was current in about 2008-2010, not sure what the comparison would look like, if done against a 2014-level Haswell. Based on what I know about AVX2, I still don't think the recent improvements are in the same vein where the Itanium used to shine... Itanium is certainly hardly an advantage for general-purpose internet serving and cloud use.

    As for alternative architectures, conceptually departing from "von Neumann with NUMA" and deterministic data management... ANN's appear to be the only plausible "very different" alternative. Memristors and fiber interconnects could well be a part of some ANN-based plot. Do memristors and photonics alone help solve the problems (architectural requirements) inherent to ANN's, such as truly massive parallelism in any-to-any interconnects, organic growth and learning by rewiring? Plus some macro structure, hierarchy and "function block flexibility" on top of that...

    I haven't seen any arguments in that direction. The required massive universal cross-connect capability in dedicated ANN hardware is a research topic in itself :-)

    Perhaps the memristors could be used to implement basic neurons = to build an ANN-style architecture, where memory and computing functions would be closely tied together, down at a rather analog level. Now consider a whole new OS for such ANN hardware :-D *that* would be something rather novel.

    What would that be called, "self-awareness v1.0" ? (SkyOS is already reserved...)

    Or, consider some hybrid architecture, where ANN-based learning and reasoning (on dedicated ANN-style hardware) would be coupled to von Neumann-style "offline storage" for big flat data, and maybe some supporting von Neumann-style computing structure for basic life support, debugging, tweaking, management, allocation of computing resources (= OS functions). *that* would be fun...

    Even if HP were pursuing some ANN scheme, the implementation of a neuron using memristors is only a very low-level component. There are teams of scientists in academia and corporations, trying to tackle higher levels of organization/hierarchy: wiring, macro function blocks, operating principles. Some of this research gets mentioned at The Register. It would sure help to have general-purpose ANN hardware miniaturized and efficient to the level of the natural grey mass - would allow the geeks to try things that so far they haven't been able to, for simple performance reasons.

  15. Steve Sims

    Changing times

    Typical Dell nonsense, but what more should one expect from a box-shifting outfit.

    Also the usual backwards thinking from many, with a "must have backwards compatibility" starting point.

    The big issue I have with the cry for backwards compatibility is the fact that this new machine has a fundamentally different hardware architecture to that which current OSs were designed for. It's an opportunity to think again - to come up with something new that can best exploit the new architecture, rather than forcing an old paradigm to work.

    To best exploit this new hardware architecture everything needs to change, and one will need new software architectures to do that. When what you have is a machine that essentially has a massive amount of persistent RAM and no disc storage, concepts like "booting", "starting an application", "saving a file" or even "disk filing system" can go out the window, as do issues like "seek time" and "disk latency". Carrying those over from a legacy OS will just bring inefficiency with them.

    Of course backwards compatibility is important, but the way to provide that is with virtualisation/emulation. Machine OS could provide a virtual environment to a *nix OS that looks like there is a regular disk and memory system. It would be very sad though if that's the only way to run software on such a system, as most of the benefits would be lost.

  16. Anonymous Coward

    "It doesn't seem likely that the arrival of persistent RAM would remove the need for a file system."

    Given enough address space and enough non-volatile RAM, why is there any need for a legacy-style file system on nonvolatile (but slow) storage? (I'm ignoring administrivia like backups etc.)

    Readers might want to have a look at MUMPS, which some folk now refer to as M. Current implementations sit on top of a filesystem because that's the way things are currently done, but the applications just know about named data objects. Hellishly unfashionable, but seemingly good at what it does.

    Starts with an M, too. Like, for example, Machine.
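
    (For those who've never met it: MUMPS "globals" are persistent sparse trees addressed by subscripts, e.g. ^Patient("Smith","visits",3). A throwaway dict-based imitation of that named-data-object flavour - the names and values are invented, and real M implementations persist this for you transparently:)

```python
# Imitate MUMPS-style globals with a recursive defaultdict: the application
# just names the data hierarchically and never thinks about files.
from collections import defaultdict

def tree():
    return defaultdict(tree)

root = tree()
root["Patient"]["Smith"]["visits"][3] = "2014-06-13"
root["Patient"]["Smith"]["dob"] = "1970-01-01"

# Lookup by subscripts, exactly as it was stored.
assert root["Patient"]["Smith"]["visits"][3] == "2014-06-13"
```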

    1. Frank Rysanek

      Re: no need for a file system

      IMO the abstraction of files (and maybe folders) is a useful way of handling opaque chunks of data that you need to interchange with other people or other machines. Keeping all your data confined to your in-memory app doesn't sound like a sufficient alternative solution to that "containerized data interchange" purpose.

      1. Anonymous Coward

        Re: no need for a file system

        "Keeping all your data confined to your in-memory app doesn't sound like a sufficient alternative solution to that "containerized data interchange" purpose."

        Perhaps someone with sufficient MUMPS knowledge can chime in on how real-world M applications (of which there are still a few) do that kind of thing? 'Cos afaict that's conceptually part of what they do.

    2. Anonymous Coward

      Yeah - MUMPS - really advanced, great idea, goes well with HP's Machine

      Take a look at this amazing technology:

      The M-Technology (Mumps) :

      The Annotated MUMPS Standards:

  17. cloudscom

    Nothing new under the Sun?

    Everything in today's computer architecture, from the CPU itself on out, has been designed to overcome latencies, because the parts needed were too expensive or too slow. For the CPU this has become hierarchical caches for instructions and data. For storage this has become spinning rust with various controllers and caches between it and main memory. And the software I/O subsystem to manage it all.

    What if the main memory was both fast enough and large enough to provide all of a typical system's storage needs? Something like a TB or so?

    Would this eliminate the need for MORE persistent storage? No. But it would eliminate the need for the storage hierarchy that we have today with i/o controllers and disk drives.

    And it would eliminate the need for much or all of the software which assumes that memory is small and therefore tasks and data must be copied in or out to persistent storage. THAT is the very large impact that TB sized persistent main memory will have on computer systems in the near future.
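
    (The hierarchy exists because of the latency gap. Rough, order-of-magnitude numbers - exact figures vary by part and year, these are only ballpark:)

```python
# Ballpark latencies, in nanoseconds, for the hierarchy described above.
LATENCY_NS = {
    "L1 cache": 1,
    "DRAM": 100,
    "NVMe SSD": 100_000,
    "spinning rust": 10_000_000,
}

# If persistent main memory sat near DRAM speed, the roughly five orders of
# magnitude between DRAM and disk - the gap all those controllers, caches
# and I/O subsystems paper over - would simply disappear.
gap = LATENCY_NS["spinning rust"] / LATENCY_NS["DRAM"]
```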

    Sure, there are ways to patch around the elimination of much of the storage subsystem in many OSs. But the concept of tasks and data kept persistently in main memory is a new one and will not be easily handled by today's OSs, be they Linux or whatever. Some new thinking is required.

    I think Dell's comments are hilarious and indicate that HP is on the right track. If memristor or other persistent memory technologies can achieve the scale they seek then it will be a whole new world for computer architecture.

    1. Anonymous Coward

      Re: Nothing new under the Sun?

      "But the concept of tasks and data kept persistently in main memory is a new one"

      Well yes, assuming everybody's forgotten core memory.

      But things were a bit different back in those days. Back then, addressable memory often was smaller than maximum configurable physical memory (eg PDP11: 64kB addressable, maximum 4MB configurable, need co-operation from the OS to move your 64kB between different parts of the 4MB).

      Today's 64bit architectures have that the other way round: logically addressable memory is (much) larger than physically configurable memory (or indeed addressable storage on disk or whatever) will ever be. So why not acknowledge that fact?
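
      (That asymmetry, in numbers - the "1 TB box" is an invented example, and current x86-64 parts expose 48 of the 64 address bits:)

```python
# Old world (PDP-11): physical memory could exceed the addressable range,
# so the OS had to shuffle your 64 kB window around the 4 MB machine.
pdp11_addressable = 2**16            # 64 kB per address space
pdp11_physical = 4 * 2**20           # 4 MB maximum configurable

# New world (x86-64): the virtual address space dwarfs any RAM you can buy.
x86_64_addressable = 2**48           # 48 address bits on current CPUs
big_server_ram = 1 * 2**40           # a generous (invented) 1 TB box

assert pdp11_addressable < pdp11_physical     # addresses < memory
assert x86_64_addressable > big_server_ram    # addresses >> memory
```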

      640TB should be enough for anybody, right?

  18. swschrad

    suppose MachineOS is actually a selector?

    friend IBM probably has ten OSes running on a z-Series mainframe by now, from JCL to containers holding Windows, Linux, and the like. if MachineOS is the manager for a bunch of system containers like that, or like Citrix, it looks like a chameleon, it shifts like a chameleon, but the user thinks they're running a native whatever. if the core and memory are agile enough, why the heck not? that's not new. not a 10-year project, either.

    we've had a lot of new OS developed since the Univac 1, and there are always a bunch of young guys who want the chance to gin something up from scratch and get their name in the books.

  19. amanfromMars 1

    Advanced Persistent Cyber Treat* …. and/or APT ACT App for Future HyperRadioProActive IT Builders

    And a little something extra XSSXXXXual for those Weakened at Weekends Edutaining the Masses with the Great Game

    What I suggest y’all be missing and be not yet quite grasping is …… Get hold of the Bigger Universal Picture and take a closer look at its IT and Collapsing Systems of Operation, and analyse to reverse engineered basics the truths and falsehoods of its product, to discover the vital fundamentals to strategically target and master reprogram/create and destroy to disrupt and rebuild with HyperRadioProActive New Clarity, which Presents and Supports with a Mutually Advantageous and Positively Reinforcing Security and Virtually Remote Protection and Practically Anonymous and Relatively Autonomous** Regimes/Programs/Projects/Pogroms/Systems of Absolutely Fabulous Fabless Executive Administrative Operation, a Novel Future in SMARTR Command and CyberIntelAIgent Control of Leading Intellectual Property Assets and not being at all just so utterly reliant on the invention and intervention and printing and production and instant electronic transfer of pretty paper ponzi wealth/fiat currency loading and loding.

    That Old Practical System of Remote Virtual Command and Control is discovered to be easily hacked and cracked wide open and subject to rampant abuse to corrupt and perverse extreme ends by principals with morally bankrupt principles and an unedifying and revealing lack of necessary future building intelligence and sensational sensitive information.

    So what’s IT gonna be ….. A Western Delicacy to Savour and Favour with Spooky Spaghetti Support or an Eastern Delight to Behold and Server from Numerous Units? Or Both Together and/or Neither and Something Else Altogether and a Treat from somewhere else? Dealer’s Choice?!.

    and there are always a bunch of young guys who want the chance to gin something up from scratch and get their name in the books. … swschrad

    :-) And always the older and wiser and more experienced who are determined and enabled and able to make sure their names are not common knowledge in books of their exploits and discoveries.

    * And Established Systems Threat via Virtual Machine OS Artilecture? Be hereby clearly advised .... Do not attack it to find out. The consequences are catastrophically dire.

    ** One wouldn’t anyone be thinking just yet that Virtual Machinery has a Mind of ITs Own and can do just as IT pleases, whenever and wherever IT pleases. That would really freak out the natives and primitives and they be crazy enough presently as IT is.

    1. Anonymous Coward
      Anonymous Coward

      Re Advanced

      "One wouldn’t anyone be thinking just yet that Virtual Machinery has a Mind of ITs Own" - of course one would, amfM, if only it had fingers to type in at ElReg LEAVE THIS CURRENCY IMMEDIATELY in Capitalis. String value wouldn't have looking necessarily spooky for the targeted human audience - ... and even the Machinery can hardly explain 3.5M+ views.

      1. amanfromMars 1 Silver badge

        Re: Re Advanced, and to APCT Opposition, which be Useless Sub-Prime Drone Competition, a warning.

        The flip side of the COIN coin ....... and Counter INtelligence on its knees and outing itself as a global terrorist threat ready for extermination ...........

        As the Guardian's Nafeez Ahmed concludes, "Minerva is a prime example of the deeply narrow-minded and self-defeating nature of military ideology. Worse still, the unwillingness of DoD officials to answer the most basic questions is symptomatic of a simple fact – in their unswerving mission to defend an increasingly unpopular global system serving the interests of a tiny minority, security agencies have no qualms about painting the rest of us as potential terrorists."

        We wonder: why is that surprising - by the time the "mass civil breakdown" is set to take place (and grand central-planning experiments by the Fed and its peers will merely accelerate said T-zero Day), virtually everyone who poses even the tiniest threat to the collapsing regime will be branded a terrorist.

        Since as we reported previously, yet another current version of what previously was merely science fiction, namely the arrival of pre-crime, or where a big data NSA "pre-cog" computer will determine who is a future terrorist threat merely based on behavioral signals, is just around the corner too, it is simply a matter of time before men in gray suits or better yet - drones - quietly arrest any and all potentially threatening social network "nodes" of future terrorist behavior on the simple grounds that their mere presence threatens the status quo with an even faster collapse.

        And now, just ignore all of the above, and keep buying stocks, because all is well: these most certainly aren't the droids you are looking for. ...... MAD Minerva Research Initiative

        To drones and robots, botnets and virtual machinery, is planet Earth just as a puny stage to test to destruction and/or failsafe confirmation, Creative and Super IntelAIgent Solutions with CHAOS* in Command and Control of the Madness in Mayhem which has Consorts in Greed to Supply Worthless Needs and Incestuous Seeds.

        * Clouds Hosting Advanced Operating Systems which be an Omniscient and therefore Omnipotent and Intangible Foe and Virtual Phantom Friend one Denies to Experience Great Loss and Exquisite Pain. Embrace the Artilecture and the Future is Designedly and Decidedly Quite Heavenly for Global Operating Devices.

  20. Anonymous Coward
    Anonymous Coward

    Don't see the need for a new OS really...

    I agree with some of the posters here that a wholly new OS is likely not needed. Linux first started on single core 32 bit chips and has been able to keep up with a great deal of change, including multi-core, GPU accelerators, etc.

    So, I would not be surprised if HP's comment about a wholly new OS were just a bit of misinformation. (Conversely, I also expect Dell's comments to be somewhat disingenuous. What are we expecting them to say? "Blast us and crush us, my precious! All is lost to the Machine!") Dell are uncreative bean counters, but certainly not fools.

    If I understand the new machine architecture, it is supposed to be more uniform than other architectures, with fewer different components... I therefore expect the uniformity of this new machine architecture to manifest at the platform and hardware level, not so much at the software level.

    The Machine could therefore be a cluster of essentially identical black boxes in standard racks, with no need for SANs or other storage devices. The flexibility of this architecture would be that any subset of the boxes could be dynamically reconfigured to suit a purpose, be it compute, store, whatever. If a box has e.g. N New Machine "cells", you could re-purpose some fraction of N to be dead storage, some portion to be active compute, some portion to be web servers, etc.

    If any cell can do anything, then this new cluster is like a sea of programs floating on new machine cells, and a hypervisor-like grand orchestrator reconfigures the cells on the fly as it moves programs around to the least-used portions of the machine.
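The "grand orchestrator" idea above can be sketched as a toy scheduler that always places the next program on the least-used cell. Everything here (class name, program names, load units) is hypothetical illustration, not anything HP has described:

```python
import heapq

class Orchestrator:
    """Toy placement of programs onto identical 'cells' (illustrative only)."""
    def __init__(self, n_cells):
        # Min-heap of (load, cell_id): the least-loaded cell is always on top.
        self.cells = [(0, i) for i in range(n_cells)]
        heapq.heapify(self.cells)

    def place(self, program, load):
        """Assign a program to the least-used cell and record its new load."""
        cell_load, cell_id = heapq.heappop(self.cells)
        heapq.heappush(self.cells, (cell_load + load, cell_id))
        return cell_id

orch = Orchestrator(n_cells=4)
placements = [orch.place(p, load) for p, load in
              [("web", 2), ("db", 5), ("batch", 1), ("cache", 1), ("etl", 3)]]
print(placements)
```

The fifth program lands back on an already-used cell only once every empty cell has been taken, which is the "floating sea of programs" behaviour the comment describes.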

    No, I don't see the need for a "new OS." What is needed may be more than just a custom kernel, but a great deal less than a completely new OS. A POSIX-compliant OS, largely based on open-source Linux, with a stripped-down kernel and a vastly expanded concept of NUMA may be enough. (I still think NUMA will be needed, because light is not instantaneous. There will still be an implicit hierarchy in that some compute resources needed by program A will be further away than others; hence, a grouping of nearly uniform :-) devices may still be necessary: "all these devices are n nanoseconds from each other".)
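The closing point, that devices end up grouped by how many nanoseconds apart they are, amounts to binning resources into latency tiers, much as NUMA node distances work today. A hypothetical sketch (cell names, latencies, and tier boundaries are all invented for illustration):

```python
# Group "cells" into NUMA-like tiers by round-trip latency, so that
# "all these devices are n nanoseconds from each other" within a tier.
latency_ns = {
    "cell0": 5, "cell1": 7,      # same board
    "cell2": 90, "cell3": 110,   # same rack, over an optical fabric
    "cell4": 900,                # remote rack
}

def tier(ns, boundaries=(10, 200)):
    """Map a latency to a tier index: 0 = local, 1 = near, 2 = far."""
    return sum(ns > b for b in boundaries)

tiers = {}
for cell, ns in latency_ns.items():
    tiers.setdefault(tier(ns), []).append(cell)
print(tiers)
```

A scheduler could then prefer to keep a program's data within its lowest-numbered tier, which is exactly the kind of expanded NUMA awareness the comment argues for instead of a whole new OS.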

  21. Old_Polish_Proverb
    IT Angle

    Race Cars

    I see development of "The Machine" as the equivalent of an automobile manufacturer's race car division. The engineers get to experiment with the exotic while the managers see if any of it can be exported to standard production vehicles. In the meantime, HP, as a company, gets a huge boost in employee morale and lots of media attention for its cutting-edge innovation.

This topic is closed for new posts.
