Apple reportedly plans ARM shift for laptops

Apple may - and we emphasise that last word - have decided to transition its laptops from Intel processors to ARM-based CPUs. Intel certainly has a fight on its hands in the media tablet market, currently dominated by ARM chippery, but does it need to worry about the laptop space too? It will if the allegation about Apple, made …

COMMENTS

This topic is closed for new posts.
  1. Marco van de Voort
    Thumb Down

    Apple != Laptop market

    Apple is the part of the laptop market that _is_ movable. If you count 64-bit versions, it will just be Apple's 6th (m68k, ppc, ppc64, x86, x86_64, ARM) architecture change.

    I think the early netbook situation demonstrated that the PC crowd is not very appreciative of anything which is not dog-standard and Windows-based.

    So even if Apple makes this move, I don't think it is a telltale sign for laptops as a whole.

    1. The BigYin

      Apart from...

      ...the fact that Windows will apparently run on ARM too.

      So not only will users get low-power and portability, but also the rich experience of *the* standard environment.

      Pffffttt............giggles

    2. ThomH

      Could be more telltale than you think

      Even if this were an official announcement from Apple, it'd have a little of the 'me too' to it, given that Microsoft has already announced an ARM port of Windows 8. Obviously the difference is that if Apple decide they want ARM then you stop being able to buy an Intel Mac anywhere, but supposing Apple were to switch and to demonstrate gains in doing so then the door will be completely open for companies that ship Windows machines to introduce competing devices into their ranges.

      So: Apple's move could start a trend, or at least have more of an impact than just on the tiny OS X audience. Though you'd have to buy into the version of events where Apple are highly influential in everything they do rather than just occasionally influential in some areas; assuming genuine benefits do appear from ARM laptops then I'd expect Windows manufacturers to offer devices anyway, and quite possibly sooner.

      1. Anonymous Coward
        Anonymous Coward

        Re: "have more of an impact than just the tiny OS X audience"

        To understand the impact don't look at the market share stats, just go to a place like the British Library.

        Why? Because it is a place where thinking, intelligent people congregate in a cosmopolitan centre of a city with global clout. You have a mixture of young people, old people, students, thinkers and entrepreneurs using the cafe for ad-hoc business meetings. The last three times I was there, I did a quick survey of the make of machines in use in the cafe. Two out of the three times Apple MacBooks came out at over 50% share, the other time at 40% share. Total sample size is now probably about 50 machines, so statistically significant.

        What does this say? Apple's share is much greater than the top-level market share stats suggest when you look at opinion formers, people with get-up-and-go and deeper thinkers. Given what I know about the market share stats, I was amazed. But then it struck me: of course those stats will be wildly different. They include the PCs running the booking systems of your local car repair service. The PCs of service centre staff who have little interest in their work or the machine they are running. The PCs of office workers counting down the hours till they can go home. Of course PC usage isn't confined to this set, but looking at market share alone says very little about the true influence and significance of OS X. PCs are associated with tired thinking and uninspired work far more than is the case for Macs. In pretty much any local Starbucks in London, you will find Mac usage at 30-40% or more (can't speak for other towns/cities as it's only recently I started noticing the truth of this).

        1. DomS
          FAIL

          Lol

          This argument (AC 07:58) is quite possibly one of the worst arguments that I have ever read. Macs are good because people in Starbucks use them. And Starbucks' clientele are clearly the height of society. Lmao. Oh dear..

          1. Marvin the Martian
            IT Angle

            @Doms

            Which part of "the British Library" can you not understand? Note "the", not "a".

            You kind of corroborate the argument, so try again.

            1. Tom Wood

              If

              there isn't a Starbucks in the British Library, it's probably not for lack of target audience.

              1. jonathanb Silver badge

                For those stateside

                If you don't understand "the British Library", think "Library of Congress".

        2. Captain Underpants
          Thumb Down

          @ AC 07:58

          I understand you're trying to suggest that OS X and FruitMachines are popular with those who know whereof they speak, and those who act as tastemakers.

          I agree to some extent (I'm a sysadmin in a university and it's astonishing how many post-docs and professors want to buy Macs for work usage, as long as they're not paying), but there are a few crucial problems with your argument:

          1) The "tastemakers" you're talking about don't necessarily know anything about the computers they use, and are just as vulnerable as the rest of the plebeian masses to marketing. Believe me, there are some exceptionally intelligent minds conducting pioneering research where I work, and yet they have all the knowledge/interest in computing of a bored ten-year-old.

          2) For the influence of the tastemakers to filter down throughout the userbase, Apple would have to offer computing options for all wallet sizes, and it's evident they have no interest in doing this. Want to know why service centres use Dell or HP or even DNUK boxes rather than FruitMachines? Because Apple machines cost more, without providing a specific advantage to justify the expenditure. Hell, even with the academic discount in place Apple hardware tends to be at least a bit more expensive than similarly-spec'd equipment from rival vendors.

          3) As for "posers in Starbucks tend to use Apple hardware", so what? Am I supposed to extrapolate that because they've got shit taste in coffee alongside a willingness to pay over the odds for it, their opinion is important?

        3. Anonymous Coward
          Anonymous Coward

          The title is required, and must contain letters and/or digits.

          Wow, that's some pretty serious assuming going on there, then.

          I suspect there are plenty of pseudo-intellectual posers down at the British Library; I personally don't see the point of going to a library to get information when I've got the fecking Internet.

  2. Drew24x7

    Something the Navy could use...

    With the Defence review scrapping carriers, it may be possible for the Royal Navy to be in the market to buy a batch of ARM-powered MacBook Airs...

    and call it the "Fleet Air ARM"

    1. Baskitcaise
      Black Helicopters

      "Fleet Air ARM"?

      WAFU SNAFU?

  3. GettinSadda
    Boffin

    Emulation

    I wonder how hard it would be to convert an application from x86 to ARM at, or shortly after, installation time, rather than emulating x86 at run-time. A good proportion of the code should be easy enough to automatically recode, and I would expect that anything dodgy (such as self-modifying code) could be detected and passed to an emulator.
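
    As a purely illustrative sketch (not from Apple or any real translator - the opcode values and handlers below are placeholders), such an install-time pass might classify each instruction, translating what it can prove safe and handing anything dodgy to the run-time emulator:

    /*
     * Toy classification pass for install-time x86 -> ARM translation.
     * Illustrative only: real x86 decoding handles variable-length
     * instructions, prefixes, and much more.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    enum xlat_result { XLAT_OK, XLAT_FALLBACK };

    /* Pretend each opcode is one byte. Anything we can't prove safe
     * (unknown opcodes, code that might modify itself) is left for the
     * run-time emulator rather than translated ahead of time. */
    static enum xlat_result translate_one(uint8_t opcode)
    {
        switch (opcode) {
        case 0x90: return XLAT_OK;       /* NOP: nothing to emit */
        case 0xC3: return XLAT_OK;       /* RET: emit the ARM return, e.g. "bx lr" */
        default:   return XLAT_FALLBACK; /* unknown/unsafe: emulate at run time */
        }
    }

    int main(void)
    {
        const uint8_t image[] = { 0x90, 0x90, 0xC3, 0x0F }; /* a toy "binary" */
        size_t fallbacks = 0;

        for (size_t i = 0; i < sizeof image; i++)
            if (translate_one(image[i]) == XLAT_FALLBACK)
                fallbacks++;

        printf("%zu of %zu instructions need the run-time emulator\n",
               fallbacks, sizeof image);
        return 0;
    }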

    1. Ocular Sinister
      Boffin

      It depends....

      On how well written the program is. I've worked on legacy apps in the past that will never get ported to ARM simply because they make too many assumptions about byte order/size. Heck, I don't think it would be practical to port one of those apps to 64-bit, let alone an architecture that swaps the endianness!
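
      A minimal sketch of the sort of byte-order assumption being described (illustrative C, not from any particular app):

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          uint32_t value = 0x11223344;
          uint8_t first;

          /* Grabbing "the first byte" of a 32-bit integer only works if you
           * already know the byte order: it is 0x44 on a little-endian CPU
           * (x86, and ARM in its usual mode) but 0x11 on a big-endian one
           * (e.g. PPC). Code built on this assumption ports badly. */
          memcpy(&first, &value, 1);
          printf("first byte in memory: 0x%02x\n", first);

          /* The portable alternative: extract bytes arithmetically, which
           * gives the same answer on every architecture. */
          printf("low byte by masking:  0x%02x\n", (unsigned)(value & 0xff));
          return 0;
      }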

      1. bhtooefr

        A few things that help...

        ...first, Mac OS previously ran on big-endian CPUs (m68k, PPC) and now runs on a little-endian CPU (x86). So, endianness is already dealt with in OS X.

        Second, any endianness issues that have crept in since the PPC->x86 transition won't affect ARM - ARM is typically run in little endian mode. (And, ARM can run in a big endian mode, too.)

        Finally, Apple started a 64-bit transition not long after starting the x86 transition. (And, IIRC, they did the 64-bit transition twice - once on PPC, then once again on x86.)

      2. ElReg!comments!Pierre

        Re: It depends

        "Heck, I don't think it would be practical to port one of those apps to 64bit, yet alone an architecture that swaps the endienness!"

        That would be apps coded by cowboys then?

        1. Anonymous Coward
          Flame

          Apps coded by Adobe - more likely

          Google for Linux, Adobe Flash, 64-bit. One of the things which got discovered when supporting Adobe's sorry attempts to go 64-bit was that they were doing memcpy() on overlapping ranges. With a coding style like that, even changing a few things in the underlying libraries will topple the bugware. Rebuilding for a different arch? Forget it.
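
          For illustration, the overlapping-memcpy() pattern being described, and the memmove() fix (a sketch, not Adobe's actual code):

          #include <stdio.h>
          #include <string.h>

          int main(void)
          {
              char buf[] = "abcdef";

              /* Shifting the string left by one inside the same buffer means
               * source and destination overlap, so memcpy() here is undefined
               * behaviour: it may happen to work with one libc or architecture
               * and silently corrupt data with another - exactly the kind of
               * bug that surfaces during a 64-bit or cross-arch port. */
              /* memcpy(buf, buf + 1, 6);    <- the broken pattern */

              /* memmove() is specified to handle overlapping ranges. */
              memmove(buf, buf + 1, 6);     /* copies "bcdef" plus the NUL */
              puts(buf);                    /* prints "bcdef" */
              return 0;
          }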

  4. dogged
    WTF?

    wait, hold on..

    If you recompile OSX for ARM64 and you keep the APIs identical, why would you need to emulate anything for additional software?

    This isn't 1995. We don't target the hardware directly anymore. That's the whole point of HALs, and in fact, Macs have always been a bit like the Catholic Church in that you need the OS (priest) as an intermediary to talk to anything important.

    Colour me slightly confused.

    1. Ashley 2
      Thumb Up

      Remember they once ran on PowerPC

      Unless I am mistaken, Apple's OS once ran on PowerPC, so they are no strangers to supporting different CPU architectures with the same API.

    2. bhtooefr
      FAIL

      Because the HAL doesn't abstract the CPU instruction set?

      Obviously, Apple will make the migration easy for new software - potentially as easy as a recompile, if there's no inline x86 assembly, but an existing binary can't run on an ARM system without resorting to x86 emulation.
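
      A sketch of the kind of thing that breaks "just recompile" - inline x86 assembly, here guarded with a portable fallback (the function name and the fallback are illustrative, not from any real Apple or third-party source):

      #include <inttypes.h>
      #include <stdint.h>
      #include <stdio.h>

      static uint64_t read_timestamp(void)
      {
      #if defined(__x86_64__) || defined(__i386__)
          /* x86-only: read the CPU's timestamp counter. */
          uint32_t lo, hi;
          __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
          return ((uint64_t)hi << 32) | lo;
      #else
          /* Without a branch like this, the file simply won't compile for
           * ARM; a real port would substitute a portable clock source such
           * as clock_gettime() or mach_absolute_time(). */
          return 0;
      #endif
      }

      int main(void)
      {
          printf("timestamp: %" PRIu64 "\n", read_timestamp());
          return 0;
      }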

      1. DrXym

        They'll use LLVM if they're smart

        9 out of 10 apps really don't care what architecture they're running on.

        If Apple are smart they'll offer an LLVM compiler target in OS X, i.e. the app wouldn't be compiled into x86 or ARM instructions; it would be compiled into LLVM bitcode. At runtime the OS would compile the bitcode into a native binary and cache it somewhere for subsequent execution. It would mean the app would work on any supported architecture - ARM, x86, anything. It would mean no more fat binaries, and no more worries the next time the OS moves again.

        LLVM is an incredibly powerful abstraction layer and I suspect Microsoft will have to do something similar.
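
        To make the idea concrete, here is roughly how the bitcode route looks with today's LLVM tools (the C is trivial; the commands in the comment are illustrative, and exact flags and target triples vary by LLVM version):

        #include <stdio.h>

        int main(void)
        {
            puts("hello from architecture-neutral bitcode");
            return 0;
        }

        /*
         * 1. The developer ships LLVM bitcode rather than x86 or ARM machine code:
         *      clang -O2 -emit-llvm -c hello.c -o hello.bc
         *
         * 2. The OS (or the store) lowers the same bitcode to whatever CPU the
         *    machine actually has:
         *      llc -mtriple=x86_64-apple-darwin hello.bc -o hello_x86.s
         *      llc -mtriple=armv7-apple-darwin  hello.bc -o hello_arm.s
         *
         * One .bc file feeds both back ends - no fat binaries, no
         * per-architecture downloads.
         */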

        1. ThomH

          @DrXym

          I think you're right; with Clang now fully capable of C++ and Objective-C++, they've switched to a Clang/LLVM pair for Xcode 4, to power not just the compilation stage but the static analyser, the as-you-type error highlighting, and a bunch of other workspace things.

          At present they're pushing all the way to a native binary, but it feels like it'd be a relatively trivial step from here to build LLVM into the OS and move towards LLVM bytecode generation being the default in the developer tools.

          1. chuckc
            Thumb Up

            @ThomH

            Yes, I agree with that too. In fact, they already do this for parts of their OpenGL implementation, which can generate code on the fly for the appropriate target (CPU, GPU), so it wouldn't be too hard to push it towards apps as well.

            I for one welcome LLVM in Xcode 4; it's fast and the static analyser is great. The project itself shows a huge amount of promise for the future and it may well be one of Apple's best decisions, along with KHTML/WebKit.

        2. Daniel B.

          @DrXym

          MS already have their own thing, MSIL. .NET stuff can be compiled to this, which theoretically runs on any arch. Of course, I haven't tested this beyond x86 and amd64, so the ability to run on a truly different arch is still in the air...

          1. Ken Hagan Gold badge

            Re: still in the air

            MS have offered managed code for ARM-based devices for many years, so actually I think the whole idea is completely "grounded" in reality.

            The problem is that there's little incentive for vendors to make it easy for customers to move to a new arch. In the closed source world, most vendors would prefer if the customer "upgrades" when they switch. Witness the number (a minority, but not an insignificant one) who offer new versions when a new version of Windows comes out, and *that's* for the same processor arch and after MS have bent over backwards to ensure full backwards compatibility.

            OTOH, since Jobs has all the third party vendors over a barrel with his AppStore (tm?) the situation may be different for Apple.

    3. kissingthecarpet
      Coat

      So

      Steve Jobs is the pope then...

      I hope the analogy with Catholic priests stops there - I'm sure Apple don't want to limit their OS to over-18s only to avoid any unpleasantness.

  5. Anonymous Coward
    Thumb Up

    Why not?

    It's only Windows that keeps most vendors tied to x86 (exceptions apply, e.g. Dell in the past had other $$$ reasons for staying with Intel but maybe both partners learnt their lesson wrt fraudulent accounting).

    x86 is already largely irrelevant in the non-Windows market, and apparently even MS are smart enough to see that if there isn't a Windows/ARM combo soon, they may be in trouble (even if the MS-touted combo is only a negotiating tactic).

    Apple are not in the Windows market, and they've already changed platform more times than a late-arriving train at Euston, so it makes sense for Apple to look at ARM, especially for notebooks.

    Go ARM.

  6. Zolko Silver badge
    Thumb Down

    ARM has no 64-bit plans

    ARM said they won't do 64-bit, but rather some extended memory addressing mechanism (40-bit?).

    http://www.pcworld.com/article/216472/arm_ceo_pc_market_not_our_target.html

    "we've decided it's not been sensible to have 64-bit programs. Extended memory addressing at 40 bits is in the latest Cortex-A15 ... but we haven't had the need for a 64-bit [arithmetic logic unit]."

    1. Ken Hagan Gold badge

      "not been sensible to have 64-bit programs"

      Outside applications like databases and video editing, this is true for x86 as well. x64 code is larger and consequently slower in most cases, delivering a net penalty to end-users. Microsoft have been strong-arming developers to do Win64 ports for a decade now with only limited success. Even their own Office division *recommend* that OEMs ship the 32-bit version, even on a 64-bit OS.

      1. Anonymous Coward
        Gates Horns

        M$ and Intel promoting bloatware?

        Who would have thought that Intel could seek to benefit from selling hardware that people don't really need with the help of their buddies at Redmond...

  7. Anonymous Coward
    Happy

    How much processing power do you need?

    It will almost certainly be the case that an Intel processor in 2013 will be more powerful than an ARM chip; however, if the computing power of ARM chips continues to rise as it has over recent years, then they will provide ample processing power for the vast majority of Apple's customers.

    If this is the case, then the argument for using a more powerful but power-hungry Intel chip becomes somewhat moot for all but the most extreme laptop users. And for those (for argument's sake) running, say, a Teradata install on their laptops, I'm sure Apple will provide a suitably priced upgrade option for an Intel chip.

  8. DrXym

    Backwards compatibility

    Users won't care about the processor driving their laptop assuming their existing apps all run on it. That means it has to have strong emulation. Without that, I see Apple being stuck in the same boat as Microsoft with their ARM aspirations. Yes the larger companies will make ARM fat binaries for their customers but legacy apps won't work and neither will some smaller apps.

    Of course, being Apple, perhaps they'll "helpfully" remove all free will from owners of such laptops and force people to obtain apps through the Mac App Store, where they will only be able to install the apps that are presented to them.

  9. John Riddoch

    FPU?

    I thought that ARM didn't have any FP support in the chip (or FP support was poor), part of the reason it was so power efficient? That's not a problem for people doing web browsing, emails, phone calls, sms etc, but it can be a killer for certain tasks. So, you have the choice of:

    - everything moves to ARM, including high-end workstations - things like Photoshop will struggle and prompt a migration to Wintel.

    - mobile platforms (e.g. Air) move to ARM leaving high end on Intel, and app vendors have to ship two sets of binaries for their apps

    Either way, it doesn't sound ideal.

    1. Luke McCarthy

      NEON

      That is true of older chips. Cortex designs have a NEON vector floating-point unit.

    2. /dev/null
      Boffin

      No FP support?!

      Actually, the Cortex-A series have two different FPUs - VFP and NEON. Not sure how they would compare to the latest Screaming-Sindy-n extensions performance-wise though.

    3. Anton Ivanov
      Flame

      And does it need FP support?

      With a proper GPU from Nvidia or AMD inside and support for using it for generic floating point in the OS... Hmm... What exactly is that FPU performance problem once again?

      The only reason we continue to abuse poor Screaming Syndy for high performance tasks is that code abusing a Tegra or AMD is not sufficiently portable and there is no software emulation for the cases when GPUs/APUs are not present. You never know what you are going to run it on, so for a commercial binary executable you end up using MMX/SSE instead.

      With Apple controlling the hardware and OS that assumption is no longer correct. You can be sure that it is present and/or emulated correctly. So the performance may not be such a problem as it seems.

      1. chuckc
        Thumb Up

        Grand Central + OpenCL

        I believe their Grand Central tech goes a long way towards letting the OS decide where to execute code, including GPUs with OpenCL support.

        So using that technology to compensate for the lack of oomph in ARM chippery should not be that hard for Apple.

  10. jeffo
    Thumb Down

    What about Windows?

    A lot of us also need to/like to run Windows on our Macs, either in Boot Camp or as a VM, and I think this is one of the big plus points of Intel-based Macs. I can't see an Intel emulator running on top of an ARM chip being that responsive.

    1. Anonymous Coward
      Anonymous Coward

      Windows 8....

      ....will run on ARM.

      1. Anonymous Coward
        Anonymous Coward

        Running Win 8 on ARM in a VM is fine, but....

        ... not much use for attracting switchers, or anyone else using x86 based legacy apps on, for example Win XP or Win 7.

    2. Anonymous Coward
      Anonymous Coward

      Hmm...

      Is this not the reason that Apple are looking at making ARM laptops? They know that a lot of their userbase like to/have to run Windows and don't want to actually force them to make a decision either way?

  11. Mark C Casey

    I don't buy it

    It doesn't make sense to run Mac OS X on ARM; there are not only cost issues but performance and compatibility ones as well.

    * Cost: they'd have to get someone to fab a larger, higher-performing ARM laptop/desktop processor to get the same kind of performance as current Intel/AMD processors. If they're doing that just for Apple, that is a monumental cost, whereas currently Apple gets a very nice cost deal with Intel.

    If they're doing it just for a single type of OSX laptop, then that is absolutely insane. The costs to fab the Apple specific ARM processor just for a single line of laptops would be astronomical.

    * Performance: ARM processors are designed with power efficiency in mind. Look, I'm a Brit so "rah rah ARM". But they are not designed for performance and will be absolutely hammered by Intel/AMD in this regard for laptop processors.

    * Compatibility: Apple would have to create an x86 emulator for ARM. That would be a massive hit on speed and likely battery life, as it runs the processor at 100% to try and eke out some semblance of speed emulating x86 for all those current x86 applications.

    Overall, it doesn't make any sense.

    (I know historically Apple has gone this route before with 68k > PPC > x86, but it really doesn't make sense, as they went that route every time for performance and cost reasons)

    1. mal7921

      Performance? That was the whole point of ARM

      "* Performance, ARM processors are designed with power efficiency in mind. Look, I'm a Brit so "rah rah ARM". But they are not designed for performance and will be absolutely hammered by Intel/AMD in this regard for laptop processors."

      Have you ever run comparable machines side by side? Even at the beginning with the Archimedes, ARM chips kicked the normal desktop-class processors clock for clock. As an example, in 1988 if you took a Mac, an Atari ST and an Amiga and put them up against the Archimedes A305 (the bottom-of-the-range machine), all running at around 8MHz, the Archimedes left the other machines for dead due to the processor architecture.

      There are numerous reasons why the Archimedes didn't take over the world, including price, lack of compatibility with current and emerging standards at the time, and Acorn's relative obscurity outside of education, especially in the UK, but performance was not an issue.

      As for the cost of a fab, there are more ARM chips manufactured every day than any other processor, and there is no reason why an Apple chip cannot be manufactured alongside someone else's chip, as the core processor is the same. ARM own the processor design, but other people make and distribute it for them.

      OK, the point on emulation is pretty valid, though did PPC emulation really hit battery power on a MacBook or MacBook Pro? I didn't notice much if any difference; if anything, XP under Boot Camp hit performance the most (and even then not by as much as many people claimed at the time).

      It would be good to compare a 2.4GHz 64-bit ARM chip with a 2.4GHz 64-bit Intel chip, but as only one of those exists for the moment, we can only speculate on the outcome.

      1. Ken Hagan Gold badge

        1988 was a long time ago

        As the article notes "Just as Intel has yet to prove its x86 chips can match ARM for power efficiency in mobile devices, ARM has yet to show it can match Intel - and AMD - chips for sheer compute performance"

        I'm told there is a fairly fundamental reason for this. For any given process technology, you can design for half the performance and expect to run at about a tenth of the power. Put another way, the second 50% of your performance will cost 90% of your power. Historically, Intel have found their profits by targeting performance and ARM has found its niche by targeting power consumption. We have seen that Intel's power ratings have improved in recent years as they've started to explicitly target ARM's market, and I expect we'll see the same in reverse as ARM start to target Intel's market.

        1. bazza Silver badge

          Yes, but not quite

          That's true if you ignore the efficiency of the instruction set and hence the number of clock cycles needed to perform a given task. x86 is terrible - not its fault, it's ancient and of its time 30 years ago, and Intel have worked wonders keeping it going this long. But the ARM instruction set is much more efficient (it is RISC after all), so clock for clock, transistor for transistor, ARM will normally outperform x86. Intel might have some advantage in floating point performance, but with codecs being run on GPUs / dedicated hardware, who really does much floating point calculation these days?

          You can see some of the effects of X86 from the performance Intel have managed to extract from Atom. That is, not very much. And all for more power and less throughput than ARMs of a similar clock rate are achieving.

          1. Anonymous Coward
            Boffin

            Re: Yes, but not quite

            "But the ARM instructions set is much more efficient (it is RISC after all), so clock for clock, transistor for transistor ARM will normally outperform X86."

            I'm not disputing that the ARM instruction set is cleaner and adheres better to the original RISC philosophy, but CPUs supporting x86 dedicate hardware to translating instructions to the RISC-like instructions of the underlying CPU core. The AMD Am29000 (http://en.wikipedia.org/wiki/AMD_Am29000) is a good example of where these worlds collide, ending up in x86-oriented CPUs.

            1. bhtooefr

              And, RISC isn't the be-all, end-all...

              ...well, at least if you use the, you know, "reduced instruction set complexity" definition, rather than the "load-store" or "all instructions are the same length" definition.

              The fastest "RISC" processors nowadays aren't RISC by any definition relating to how complex the instruction set is.

              If you're dispatching micro-ops, your ISA is no longer RISC, in my opinion.

              But, that's not a bad thing - an instruction that turns into several micro-ops (assuming that instructions are the same length, which is true on ARM (except for Thumb), or that the longer instruction isn't too much longer, which is almost always true on x86) uses less memory than the same task implemented as multiple instructions that translate directly to micro-ops. Using less memory means that the instruction gets loaded into the caches quicker, and it uses less cache (except for micro-op decode cache).

              All of this means that you don't need absurdly fast memory bandwidth to get good performance out of a CPU that uses these techniques, and you can use a little less RAM. This is why x86 machines could be fast in real-world use, despite atrocious synthetic memory benchmarks compared to various absurdly expensive RISC workstations.

              Fun fact: ARM Cortex-A8 is no longer RISC, by my definition - the ARMv7 ISA includes some multiple load and multiple store instructions that are broken up into individual load/store instructions in the CPU. The micro-op instruction set is still ARM in an ARM CPU that uses micro-ops, and most instructions do still map 1:1 with an internal instruction, but there are a few that are broken up.

              (Also, the Thumb decoder dispatches ARM instructions, but I believe every Thumb or Thumb-2 instruction maps 1:1 with an ARM instruction, so it's not really micro-ops, there.)

          2. Ken Hagan Gold badge

            Re: Yes, but not quite

            Instruction decode is about 2% of a desktop CPU's area. x86 has higher code density than ARM, unless you use thumb, at which point it is comparable but similarly register-starved. Sorry, but I just don't see the greater "efficiency" of the ARM's instruction set.

            In any case, clock for clock or transistor for transistor comparisons don't count. You need to consider the whole product. For example, Intel were never able to clock Itanium chips as fast as they did the Pentium 4, and the latter were then out-performed by slower-clocking successors.

            1. bazza Silver badge

              @Ken Hagan

              Well, transistor for transistor and clock for clock comparisons do count. The ARM core, even today, is still about 32,000 transistors. Intel won't tell us how many transistors there are in the x86 core (just some vague count of the number on the entire chip), but it's going to be way more than 32,000. So if you're selling a customer Nmm^2 of silicon (and this is what drives the cost and power consumption) you're going to be giving them more ARM cores than x86 cores.

              Then you add caches and other stuff. On x86 there is a translation unit from X86 to whatever internal RISCesque opcodes a modern x86 actually executes internally. ARMs don't need that. X86 has loads of old fashioned modes (16bit code anyone?) and addressing schemes, and all of that makes for complicated pipelines, caches, memory systems, etc. ARM is much simpler here, so fewer transistors needed.

                What ARM are demonstrating is that whilst x86s are indeed mighty powerful beasts, they're not well optimised for the jobs people actually want to do. x86s can do almost anything, but most people just want to watch some video, play some music, do a bit of web browsing and messaging. Put a low gate count core alongside some well-chosen hardware accelerators and you can get a part that much more efficiently delivers what customers actually want.

                That has been well known for a long time now, but the hard and fast constraints of power consumption have driven the mobile devices market to adopt something other than x86. No one can argue that the x86 instruction set and all the baggage that comes with it is more efficient than ARM, given the overwhelming opinion of almost every phone manufacturer out there.

              On a professional level needing as much computational grunt as I can get, both PowerPC and x86 have been very good for some considerable time. ARM's approach of shoving the maths bits out in to a dedicated hardware coprocessor will do my professional domain no good whatsoever! It's already bad enough splitting a task out across tens of PowerPCs / X86; I don't want to have to split them out even further across hundreds of ARMs.

              1. Ken Hagan Gold badge

                @bazza

                "The ARM core, even today, is still about 32,000 transistors. "

                That's no FPU and no SIMD instructions then.

                "So if you're selling a customer Nmm^2 of silicon (and this is what drives the cost and power consumption) you're going to be giving them more ARM cores than x86 cores."

                No-one sells square millimetres of silicon. They sell CPUs and these days they sell CPUs with multiple cores, but not too many because you simply can't get the data on and off fast enough to make it worthwhile. Look at Larrabee or Cell. These remain niche products because the bottleneck hasn't been CPU speed or size for some time.

                "Then you add caches and other stuff."

                Indeed. A modern desktop computer is a cache with an ocean of slow memory on one side and an excess of processing power on the other side. Your 32000 transistor core is going to be clocking at a few tens of megahertz (DRAM speeds) unless you spend about a million transistors on L1 and L2 caches.

                "On x86 there is a translation unit from X86 to whatever internal RISCesque opcodes a modern x86 actually executes internally."

                Actually this is an urban myth. There *are* a few x86 instructions that bail to microcode, but apparently CISC-y things like "add eax,[ecx+edx*8]" are implemented fully in the processor pipeline. The address generation stage has its own ALU and the argument fetch stage can talk to the L1 cache. In effect, x86 *is* the internal RISCesque opcode set.

                "ARMs don't need that."

                But if they are to get close to x86 performance, they'll need out-of-order execution, which will blow your 32000 transistor budget all the way to Pluto. This is particularly true because the ARM would require multiple instructions to accomplish the "add" instruction mentioned earlier. That's multiple live (architected) registers and multiple trips down the pipeline. If those aren't allowed to run OoO, you'll need to clock at some multiple of the Intel chip to keep step, and power consumption goes with the square of clock speed.

                "X86s can do almost anything, but most people just want to watch some video, play some music, do a bit of web browsing and messaging. Put a low gate count core alongside some well chosen hardware accelerators and you can get a part that much more efficiently delivers what actually customers want."

                Which is great until the world starts using different codecs, which it does every few years. Then you start wondering if it wouldn't have been smarter to spend the same transistor budget on making your general purpose CPU a little faster. Or smarter still to save on the R+D of those units altogether (which you'll have to claw back by selling the final product at a premium) and buy an off-the-shelf solution from Intel.

                "No one can argue that x86 instruction set and all the baggage that comes with it is more efficient than ARM given the overwhelming opinion of almost every phone manufacturer out there."

                Phones are a very specialised segment. You can get away with a fixed number of codecs, hard-wired, and there's very little other processing to do, so a feeble ARM core is a good design choice. A feeble x86 core would be good too, but Intel simply don't offer one and so we arrive at the present market segmentation for largely historical reasons.

                OTOH, for a desktop core, instruction decode is a few percent of chip area these days, so in *that* market, what you describe as "baggage" is actually lost in the noise.

                Within living memory, Intel have tried to replace x86 with something they designed to be intrinsically better. It didn't make enough of a difference to be measurable. They've also made ARM chips so if there was anything intrinsically better in *that* ISA, they'd presumably know about it. The evidence suggests that x86 just isn't bad enough to measure, let alone matter, except at the absurdly low end of the market and with "devices" getting more and more powerful each year, that's an end of the market that is disappearing.

                In fact, you could say that ARM is moving up-market simply to stay in existence. Perhaps in 5 or 10 years time we'll look at tiny ARM chips the same way that we look at the 8042 chip. The ARM started as the CPU for a full-blown computer and then found its niche for a decade or so in less powerful products, eventually fading out of existence as even those products evolved to require increasing amounts of processing power.

                Or maybe it is the desktop (and the x86) that will be replaced by tablets (with ARMs in them for largely historical reasons).

                1. Anonymous Coward
                  Anonymous Coward

                  Re: @bazza

                  "But if they are to get close to x86 performance, they'll need out-of-order execution, which will blow your 32000 transistor budget all the way to Pluto."

                  I don't think bazza was arguing for 32000-transistor CPUs. The point was that the modest transistor budget can be spent on other things: more cores per die, more cache, specialised stuff, whatever. Steve Furber was last seen putting thousands of cores on a die - something you'd struggle to do with any recent x86 offering.

                  You can argue about whether any of the above is sensible, but the point is that you get to do it if you haven't had to commit millions of transistors already. And does that flexibility matter? Of course it does: why do you think ARM has been so successful in the embedded space?

                2. chuckc
                  Thumb Up

                  @Ken Hagan

                  All of this matches my biased expectations. If ARM want to get performance that even comes close to what Intel has now, they will have to implement everything that Intel has already stuffed into their chips for years, among other things out-of-order execution (isn't the A9 or the A15 out of order already, while the Atom is in order?), and at that point they will lose all their power advantages, regardless of how wonderful their instruction set is.

          3. chuckc

            Ars seems to disagree...

            ... if you want to come close to intel's performance, there's no "magic dust" that ARM can sprinkle, and power consumption will go up regardless of ISA.

            http://arstechnica.com/apple/news/2011/05/apple-could-adopt-arm-for-laptops-but-why-would-it.ars

      2. Anonymous Coward
        Boffin

        Re: Performance? That was the whole point of ARM

        "Even at the beginning with the Archimedes, ARM chips kicked the normal desktoip class processors clock for clock, as an example in 1988 if you took a Mac, an Atari ST and an Amiga and put them up against the Archimedes A305 (The bottom of the range machine), all running at around 8MHz, the Archimedes left the other machines for dead due to the processor architecture."

        This may be true, although the A305 had the same CPU as the top of the range A440 upon the Archimedes series' introduction, so the "bottom of the range" label has only rhetorical value.

        "There are numerous reasons why the Archimedes didn't take over the world, including price, lack of compatability with current and emerging standards at the time, and Acorn's relative obscurity outside of education, especially in the UK, but performance was not an issue."

        Actually, a few performance-related things came up: lack of hardware floating point support (solved initially using an off-the-shelf coprocessor solution, finally remedied using a from-scratch coprocessor which was eventually integrated into some CPUs, which meant that the 48MHz ARM7500FE in an A7000+ was quite possibly faster at floating point than the 200+MHz StrongARM in a RiscPC, but still arguably not competitive with other CPU families); and issues with page sizes and address translation tables in systems supporting virtual memory. In addition, the relatively slow iteration period and ARM's refocusing on other customers (notably Apple with their Newton, perhaps along with various internal projects that never made it to market) meant that beyond the ARM3, the competition was able to close the gap.

    2. JEDIDIAH
      Linux

      I don't buy it either.

      The major pain point is performance. This becomes readily apparent if you try to use non-supported video formats with an iPhone or AppleTV. The claim in the article that no one would notice the difference is just mindless fanboyism.

      The iPhone and the AppleTV are sufficiently locked down that most people are unable to run up against these performance limitations. That's not the case with a general purpose machine and applications that will try to exploit every cycle that the platform has to offer.

      Compatibility is also a major issue, especially for proprietary platforms where most of the common tools are not available as source code that anyone is free to use to start the porting effort.

      1. bazza Silver badge

        @JEDIDIAH

        Yes you are correct, and indeed users of other sorts of phones don't run in to performance limitations either.

        What the market place is clearly showing is that most people don't want general purpose computing, at least not beyond a certain level of performance. After all, almost any old ARM these days can run a word processor, spreadsheet, web browser and email client perfectly well, and hardware accelerators are doing the rest.

        Intel are clinging on to high performance for general purpose computing, and are failing to retain enough of that performance when they cut it down to size (Atom). ARM are in effect saying nuts to high performance and are focusing only on those areas of computing that the majority of people want.

        Those of us who do want high performance general purpose computing are likely to be backed in to a shrinking niche that is more and more separated from mainstream computing. The high performance embedded world has been there for years - very slow updates to Freescale's PowerPC line, Intel's chips not really being contenders until quite recently and even then only by luck rather than judgement on Intel's part. It could be that the likes of nVidia and ATI become the only source of high speed maths grunt, but GPUs are currently quite limited in the sorts of large scale maths applications that work well on them and aren't pleasant or simple to exploit to their maximum potential. Who knows what the super computer people are going to do in the future.

        1. JEDIDIAH
          Linux

          Silly fanboy nonsense.

          > What the market place is clearly showing is that most

          > people don't want general purpose computing, at

          So people have stopped buying all of those cameras that have video formats that will make an iThing choke? They're giving up BluRay and DVD too? Don't think so.

          While there are plenty of people willing to buy limited SUPPLEMENTAL devices, there's no real indication yet that people are willing to completely give up some means to deal with whatever content and devices are out there.

          Sometimes you want to do something there isn't speciality silicon for. You don't even have to be that geeky to want such a thing either. Apple shills are trying to redefine "geeky" while ignoring Apple's own marketing history.

  12. Ashley 2

    Another choice.

    You presented two possible choices here:

    "But then it either has to convert the range into iOS machines, to run existing apps, or develop yet another emulator to allow new Air buyers to run their existing OS X apps. Forcing them to re-buy new, ARM-compiled versions of apps seems a very unlikely strategy."

    1) Emulate existing apps

    2) Pay for recompiled apps

    If the ARM user market is big enough to encourage software houses to support it, a further, more palatable option is that you pay once and run the version compiled for the CPU you happen to be using.

    1. Frumious Bandersnatch

      third option

      http://en.wikipedia.org/wiki/Fat_binary

      Of course, this doesn't help with existing apps (do I have to put a TM after this since it's an Apple (TM) article?) which would need to be recompiled and someone ultimately has to pay for the recompilation. But at least in principle it doesn't strike me as being too difficult a transition for people to make. The wiki page has more useful observations which would seem to be quite relevant to this article.
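
      For anyone unfamiliar with the mechanism, here is a rough sketch of how a fat (universal) binary is put together on OS X today (a trivial C program; the commands in the comment are illustrative, and flags may vary by toolchain version):

      #include <stdio.h>

      int main(void)
      {
          puts("same program, more than one architecture in one file");
          return 0;
      }

      /*
       * Build per-architecture binaries, then glue them together:
       *   clang -arch i386   hello.c -o hello_i386
       *   clang -arch x86_64 hello.c -o hello_x86_64
       *   lipo -create hello_i386 hello_x86_64 -output hello_universal
       *   lipo -info hello_universal    # lists the architectures inside
       *
       * The loader picks the right slice at run time - the "pay once, run the
       * right version" property discussed above, at the cost of shipping
       * every slice to every machine.
       */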

  13. Anonymous Coward
    Anonymous Coward

    The key

    Many of you questioned the usability of a Mac App store, but if this thing goes through, imagine the possibilities.

    Apple is already hard at work forcing developers to go through the App Store. Design awards are only handed out to App Store programs, and boxed software will soon disappear from retail. If Apple pushes OS X 10.7 through the App Store, not even Microsoft or Adobe could claim that it would be unsuitable for delivering 'mature' applications.

    Say that you are on a MacBook Pro x86, and you buy a new MacBook Air running ARM. Simply log onto the App Store, and all your apps compiled for ARM will download. Because Apple already has all Apps on their servers, a re-compile will not cost them anything. With Castle, presumably even your preferences will carry over.

    So yes, if Apple were to do this, any software issues would be non-existent.

    1. MD Rackham
      WTF?

      That doesn't make any sense...

      ...unless you assume all software is written by Apple.

      Even if Adobe and Microsoft sold their software via the App Store--and the App Store rules were changed to make that even possible--that wouldn't give Apple the ability to re-compile anything.

      And anyone who says that "all you have to do is re-compile" is a blithering idiot who knows absolutely nothing about software development--and, yes, I include Mr. Jobs in that. Oh, and has a really short memory too, apparently forgetting how long it took to make the 68K to PPC and PPC to Intel transitions.

    2. Stuart Castle Silver badge

      ok

      So, Apple are going to recompile old versions of their software for no profit? Microsoft and other software companies are going to do the same? Somehow, I doubt it..

      No. Assuming Apple do this (and bearing in mind the boost switching macs to x86 gave to the sales of the mac, I'd be staggered if they did), they would need to write an emulation layer which would almost certainly cripple performance, regardless of the perceived advantages of ARM.

  14. Stephen Booth

    Long game going on here

    Even if it does not make sense in the short term, it is not surprising that Apple is looking seriously at ARM. Not because it is obviously better for the users, but because it is better for Apple.

    Apple is still primarily a hardware company, but the ongoing trend in hardware is for more and more of the important hardware to be combined into a single device package. Any company that sticks with Intel is going to end up putting a cosmetic case around the Intel package, the same as all their competitors. This makes it hard to be different enough to charge much of a premium for your hardware.

    Things are much easier if you license ARM and build your own processors. Even if most of the hard stuff comes from ARM, you still have plenty of input into the design and you can take that opportunity to make sure your software won't run on 3rd party hardware.

    Now I know that Apple are the world leaders in getting people to pay a premium for hardware that is not very dissimilar to everyone else's, but there must always be a risk that they might get landed with a legal ruling unambiguously legalising the hackintosh. The more markets they can move to the iPhone/iPod model, the better from their point of view.

  15. jubtastic1
    Go

    Size and battery life

    Two things Apple cares deeply about with regards to portable computers and in which ARM solutions beat x86 hands down.

    So long as the ARM chip provides somewhere close to the performance of the ageing C2D chips in the Air line-up this is a bit of an obvious play, with the App store taking care of the relevant compiled binaries being delivered for differing architectures as others have argued.

  16. Michael C

    Not ANY time soon

    Even with 64-bit ARM chips (next year), including quad cores and higher-end GPUs capable of handling what people expect from Apple (iLife), Apple is not in the business of compromising the performance of their machines to fit a niche market. ARM doesn't support TB, doesn't support DisplayPort, doesn't have a SATA interface, and so much more.

    Can they make OS X run on ARM? Yeah, iOS is a port of OS X itself... Can they do it in a compelling form factor, with compelling performance, comparable to their MacBook base model or Air? Well, since the price offset between the ARM and the i3 is about $25, and they might remove 10-20% of the battery in the process saving maybe an additional $10, I don't think they could reasonably shave more than $100 total off the price of the machine, and it would fall far short of full MacBook performance. This might be usable for a dual-boot tablet sometime in late 2012/mid-2013, giving some limited access to basic OS X apps, but again, it might ride that price up enough to not be relevant, especially if still limited to tablet resolutions on the screen. At the same time, MacBook prices are falling, and it's reasonable we could see a $700 MacBook by that point, if not less.

    Yeah, they're pursuing it; Apple keeps their options open. We found out at the Intel launch that they had from DAY ONE built and compiled every single piece of code on at least 3 platforms (Power, Intel, AMD, and a hint there were others too), so then going strong on ARM is a given, but will it result in a product? Not on current ARM architecture. When ARM is significantly more powerful than Atom, and can still be cheaper and more power efficient, it might replace the lowest-end machines, but only if Xcode can cross-compile ANY app with a few clicks; otherwise those using ARM OS X may have a dramatic disadvantage in software availability, and emulating x64 on ARM64 just isn't going to work well.

    1. Dennis Healey
      Pint

      Re - not anytime soon

      .... or Apple could make sure they have enough ARM cores and encourage developers to use them, maybe dedicating them to specific tasks. They could be driving change here.

      Increased battery life is always useful, particularly if your competitors don't have it.

      Reducing the temperature means less energy wasted on cooling away waste heat, therefore increasing battery life.

      Any cash saved could be put towards some flash memory (increasing access speed & economising on power). If you still need an HD, this could be switched off, as it would be lower-level storage than the flash memory.

      This sounds to me like quite an attractive machine.

  17. Anonymous Coward
    Welcome

    "doesn't support TB, doesn't support <pointless list>"

    "ARM doesn't support TB, doesn't support Display Port, doesn't have a SATA interface, and so much more."

    Er, you do realise that ARM defines a chip architecture and that ARM licensees are free to put on it whatever peripheral frippery they wish, don't you? And ARM licensees have been doing exactly that for such a long time that Intel are unlikely to EVER catch up in the low-wattage market sector with any x86-based product, neither on performance (speed, wattage) nor price (except Intel can loss-lead with their price, though that on its own may not particularly hurt Intel).

    "it might ride that price up enough to not be relevent"

    This is Apple. If Apple build it, the punters will bow down and buy it, even if it sucks.

  18. Anonymous Coward
    Anonymous Coward

    App store

    No need for an emulator. The App Store model means that the correct binary will be delivered to the machine. You'll be able to switch from an x86 to an ARM machine, and when it connects it will download all the same apps you had bought from the Apple App Store previously onto your new ARM machine. That is the great thing about the App Store: it removes the requirement for binary compatibility. By then most apps will be coming from the App Store. For those that bought outside the App Store, though, you're probably SOL!

    1. Volker Hett

      No!

      I don't expect Microsoft Office and Adobe to deliver anything usable on non-x86 chips.

      I have a laptop for Adobe Lightroom and Microsoft Office, and an iPad for mail, calendar, contacts, notes and the occasional game.

    2. JEDIDIAH
      Linux

      App store Schmapps store.

      You don't need a poor copy of apt-get to manage different hardware architectures for the same app. You can simply package them together and let the installer logic sort things out. Or you could even use fat binaries, but that wastes a lot of space on devices that don't really have any to spare. Either way, it's not a terribly difficult problem.

      1. A J Stiles
        Linux

        One application, many architectures

        I have come up with a way to make Ubuntu / Debian .deb packages which are installable on multiple architectures with minimal bloat. (The technique probably could be modified for .rpm packages too, but my familiarity is with Debian.)

        In order to make a convincing Declaration of Prior Art and so prevent Apple or anyone else from patenting this, let's just say that it relies on using postinst in a rather *ahem* creative way -- but one that should, nonetheless, be blindingly obvious to anyone who understands the gory details of .deb packages.

  19. Giles Jones Gold badge

    Is there a problem with x86?

    I'm pretty sure this is the whole hardware industry not wanting to be at the mercy of Intel. It doesn't really have much to do with the chips being good or bad.

    With ARM being an IP company licensing designs, it means each computer manufacturer can build integrated systems around an ARM design.

    Okay, x86 is a mess of legacy backward compatibility but it has worked until now.

    1. Daniel B.

      Yes, there is.

      X86 is shit, has always been shit, and the computing industry as a whole has taken a huge step back by sticking to the damned architecture. PPC, ARM, and basically every single RISC arch outperformed x86 by a lot; Intel then began jacking up the CPU clock 'till it matched the other processors. Of course, this means that current x86 suck a lot of juice, and run really hot compared to their RISC brethren.

      The computing industry is now running stuff in the computer equivalent of a VW Beetle running on aviation fuel. Sure it can run real fast, but it's taxing on the engine!

  20. sT0rNG b4R3 duRiD

    A new architecture?

    I'll believe it when I see it, but I do not believe there are ARM chips out there that quite match Intel's, apart from the Atoms.

    I would think ARM would need a better reference design (probably 64-bit, probably out of order, probably multi-core/threaded) to compete with Intel.

    Doesn't POWER already have all this?

    Is it easier to get something like a POWER or a G5 design adapted for whatever Apple have in mind than scaling up an ARM chip?

    AFAIK no out of order ARMs exist yet... So it'll have to be a totally new chip anyways.

    Why not bring back PowerPCs? Didn't P.A. Semi work on PowerPCs before too?

    1. Ilgaz

      IBM stays the hell out of end user market

      Of course it is not that IBM couldn't deliver a 3GHz G5 (check POWER5+ speeds); it is simply that IBM didn't care for end-user things. It is their corporate system.

      Consoles are a really different business, but you can check how easy it was for MS to steer their own chip (similar to the 970) to give excellent performance for the Xbox 360. Or Sony with the Cell.

      The POWER ISA (much like ARM) is currently focused on enterprise servers, advanced scientific computing and game consoles. I think IBM learned their lesson with the G5 and figured they can never compete with the x86 idiocy on desktop/laptop. I mean, ship a light POWER7 running at 5GHz to market now and people/media will still talk about the buggy Sandy Bridge.

      For similar reasons, ARM will have problems on the desktop too. Some people think Adobe will ship a Photoshop CS5 with just a recompile. Well, we (PowerPC users) learned that it doesn't work that way.

  21. Doug 3

    just need to match perf on laptops

    The ARM solutions only have to match the performance of the Intel-based laptops, not all of Intel's CPUs. With ARM chips already hitting 1.5GHz, the current die-shrink methods show they'll hit 2GHz in no time. Pair that with how small and power-efficient multi-core ARM chips are, as opposed to the best Intel can do, and you should see why Apple might go this route.

    And if I were Apple and had seen demos of what these multi-core high-speed chips can do, and had tablet hardware they wanted to merge with their desktop over a few years, then this is a no-brainer. ARM is already scaling up, and it's pretty obvious that the future of CPU design is the multi-core method - it makes sense and cents. I'm not much of a fan of Apple, but they don't seem to be on the wrong path for profits and efficiency very often. Add their App Store for desktops and laptops and you also have a way to get ARM- or x86-based apps to customers as needed, without the customer knowing which one they need, since the App Store infrastructure will figure that out and deliver the correct version for your hardware.

    I also recall recent news of Apple teaming with Intel to build their ARM chips instead of Samsung doing it. If that means using Intel's new processes (32nm, 22nm or even 3D 22nm) then there's lots of things the faster, smaller ARM chips will do.

    It would be great to finally start to see ARM laptops as long as their boot systems are not locked to the OS. I also wonder how much Google ChromeOS has to do with all this since it could be what's triggered Microsoft Windows for ARM and that's triggered all the new interest in ARM in PC sized devices.

    1. JEDIDIAH
      Linux

      Fanboys arguing against an outdated view of the opposition.

      > the ARM solutions only have to match the performance of the Intel based laptops,

      Which includes Sandy Bridge.

      Intel doesn't exactly stand still either.

      ARM can't even match Atom, and that's the stuff other PC users snicker at. ARM has a very narrow area of appeal. Beyond that, it has no hope of competing against the PC on its own terms.

    2. Anonymous Coward
      Thumb Down

      Tablets are slow

      Tablets are wonderful and amazing (I have an iPad) but come on, fast and powerful they are not. Compare a tablet objectively with any laptop made in the last 4-5 years and it's no contest.

      Don't be fooled by GHz--2GHz for an ARM is like 1GHz for Intel (non-Atom). Those 2GHz you're hoping for from ARM will get them to the point of being slightly faster than a basic Atom netbook.

  22. Anonymous Coward
    Anonymous Coward

    Much too slow

    Look at the benchmarks. Per MHz, per core, ARM is ~1/2 as fast as a Core 2 at integer workloads and ~1/5 as fast at floating point.

    Remember that many people consider the performance of the 11.6" MacBook Air (1.4GHz Core 2 Duo) unacceptable. Switching to ARM would make it less than half as fast. How would that work?

    Intel already fought the CISC vs. RISC war 15 years ago and won. There is nothing magical about ARM; it's just 20-year-old processor technology manufactured on a modern process. If they want performance competitive with Intel they will have to increase transistor count and power consumption to the point where there's no difference between ARM and Intel, just like IBM, DEC, HP, SGI, etc. did a decade ago.

    Not saying anything bad about ARM, it's great to have in a phone or a tablet, but why would you want one in your desktop or laptop?

    1. Charlie Clark Silver badge

      Slow

      Actually, I've rarely heard people complain about the speed of a MacBook Air. They do rave about the battery life, weight and portability. And, because it's so light, they can also take an iPad along for browsing and e-mail, for which the iPad is powerful enough. These customers generally don't need to worry about the price of devices.

      But when it comes to power: nVidia have publicly announced that they expect to match x86 chips for performance with their summer releases. They have access to the fabs and can offer GPU integration in an SoC that will definitely outperform Intel's own SoCs. Even adding hardware x86 emulation to the chip, so that existing apps continue to run, isn't a problem, because the ARM designs excel at hardware specialisation, and this is where most of the power/performance gains against Intel's silicon can be made.

      64-bit, apart from addressing more than 4GB of RAM, is a red herring for consumer devices. Again, it is the hardware extensions that will make things zip along, and Apple already supports off-loading calculation-intensive tasks to nVidia's CUDA architecture. ARM also makes multi-core more interesting: running different tasks on different cores at different speeds. Apple, with its own chip designers, can probably contribute some expertise to an area which would mean easier-to-assemble systems - Apple TVs with a screen, big batteries and profit margins.

      Apple now has several years' experience of cross-compiling the OS X core and applications (mail, browser, etc.) across x86 and ARM, but as there have been no indications of ARM builds of Lion, I guess we are unlikely to see a "Mac"-branded product using ARM chips this year. However, we may well see an iPad Pro or an ARM-based iBook for people who like the iPad but want to be able to do a little bit more than word processing on it.

      Apple will still want to segment the market so that any device it releases does not cannibalise the still very successful notebook line until it feels it has the chippery for a full migration. Though downward pressure on tablet pricing should help here.

      1. sT0rNG b4R3 duRiD

        Not dissing the arm...

        I hope to see the ARM grow...

        But like I said earlier, there's probably an architecture already mature enough for this market that perhaps just needs to be adapted/refreshed a little to fit in... POWER/ppc.

        IBM has to keep this line up to date eventually (I'm sure its console sales are not small). Bit of pressure on them, perhaps?

    2. famousringo

      It's not the speed...

      It's the speed per watt.

      Wikipedia tells me that the most efficient C2D chews up 10W at a 1.6GHz clock. Google tells me that a dual-core A9 can offer a 1.6GHz clock with a 2W power draw. Make that a quad-core A9 and you have something with roughly the same integer performance as a C2D and less than half the power requirement.

      As long as the end result is snappy enough for the user, you can build a computer that's lighter, smaller, and runs longer. If you can keep that ARM cool without a fan, you can save even more space and power.
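
      (For illustration, here's that arithmetic as a tiny C sketch. The DMIPS/MHz and wattage figures are rough assumptions picked to match the numbers above - roughly ARM's published A9 rating and the C2D power draw - not measured results.)

        /* Back-of-the-envelope perf-per-watt comparison -- a minimal sketch.
         * The DMIPS/MHz and wattage figures are illustrative assumptions,
         * not measured values. */
        #include <stdio.h>

        static void report(const char *name, int cores, double mhz,
                           double dmips_per_mhz, double watts)
        {
            double dmips = cores * mhz * dmips_per_mhz;   /* aggregate throughput */
            printf("%-14s %8.0f DMIPS  %6.1f W  %8.0f DMIPS/W\n",
                   name, dmips, watts, dmips / watts);
        }

        int main(void)
        {
            /* Assumed: Cortex-A9 ~2.5 DMIPS/MHz; Core 2 ~3 DMIPS/MHz; both at 1.6GHz. */
            report("2x Cortex-A9", 2, 1600, 2.5, 2.0);
            report("4x Cortex-A9", 4, 1600, 2.5, 4.0);
            report("2x Core 2",    2, 1600, 3.0, 10.0);
            return 0;
        }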

      1. Anonymous Coward
        Anonymous Coward

        Quad core != twice as fast as dual core

        "Make that a quad-core A9 and you have something with roughly the same integer performance as a C2D"

        Yes, when running a quad core benchmark. But not when doing most user-facing computer tasks (single threaded). I'm sure you know that if you sat down in front of a 3GHz dual core it would seem much faster than a 2GHz quad core unless you were doing something like video encoding or 3D rendering etc.
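
        (To put a number on that, here's a quick Amdahl's-law sketch in C; the 30% parallel fraction is purely an illustrative assumption about a "typical" desktop task, not a measurement.)

          /* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
           * parallelisable fraction of the work and n the number of cores.
           * The value of p below is an illustrative assumption. */
          #include <stdio.h>

          static double amdahl(double p, int n)
          {
              return 1.0 / ((1.0 - p) + p / n);
          }

          int main(void)
          {
              double p = 0.30;   /* assume 30% of a typical desktop task parallelises */
              for (int n = 1; n <= 8; n *= 2)
                  printf("%d cores: %.2fx speedup\n", n, amdahl(p, n));
              return 0;
          }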

        ARM has a long way to go to catch up to the perceived performance of an Intel-based laptop, and by the time they get there their chips will be big and hot too.

    3. Ilgaz

      CISC won?

      Better check the Top500 supercomputer list sometime. Or check a real, big enterprise and see what it runs on its servers.

      ARM, and RISC architectures in general, are 10x bigger than Intel if you count, say, CPUs per person. Computing is way beyond laptops/desktops now, and in the new way of doing things it is just ./configure && make to move from one arch to another. That is why Intel is in panic mode and why crazy rumours like this can appear; the "Win" in the Wintel scheme is what has been selling them.

      I've got news for you too: check the inner workings of the chips you assume are CISC and 'won', and you will be surprised. There isn't pure CISC or RISC anymore; things have become much more hybrid (including kernels, especially Darwin).

  23. Anonymous Coward
    Thumb Up

    @AC 18:49: "Look at the benchmarks. " (etc)

    "Look at the benchmarks. "

    Once you give people some specific examples of specific benchmarks on specific ARM and x86 implementations, we can have that discussion.

    "Intel already fought the CISC vs. RISC war 15 years ago and won"

    RISC didn't exactly lose the technical battle although it lost the marketing war. Inside any modern x86 is a RISC-like core being used for executing an x86 instruction set.

    "old processor technology manufactured with a modern process."

    That (leading-edge process technology building a trailing-edge-architecture CPU) is a perfect description of Intel's recipe for success (to date).

    ARM do not build processors. ARM licensees do, in whatever process technology fits the job.

    There are efficiency features in the basic ARM architecture which x86 cannot incorporate without abandoning basic x86-compatibility (things like code predication for smaller faster code).

    "why would you want one in your desktop or laptop?"

    Because sensible people have no inherent interest in Windows/x86 and the baggage it now brings, only in what they can do with it (or an acceptable substitute). As energy prices go up, we have a significant interest in boxes which are low cost to buy and low cost to operate, and (where applicable) have long battery life. If I can cut these lifetime costs by (say) 30% over three to five years, I'll happily sacrifice the rarely-relevant performance advantage which modern x86 boxes admittedly have; where high end x86 performance (and maybe >>32 bit addressing) is needed, there'll still be Dell.

    So, let's see those benchmarks.

    1. Anonymous Coward
      Anonymous Coward

      Benchmarks

      It is amazing to me that technically inclined people like the readership of El Reg can have spirited debates about processors without knowing anything quantitative about their relative performance other than what clock speeds they run at, which is almost worthless.

      Personally I develop processor intensive software and I can't tell you much about it without revealing the identity of my employer, but it runs twice as fast per MHz on a Core 2 vs. an iPhone 4 (integer) and five times faster (floating point).

      I'm sure you will cry foul at these numbers, but I'm confident they will be confirmed by other benchmarks. Right off the bat, a Google search reveals "Geekmark" and "EEMBC CoreMark", both of which show Intel having at least a 60% integer advantage per MHz per core vs. ARM, and it looks like usually more.

      As for Intel having a "trailing edge architecture CPU", I think you need to re-evaluate. You are right that the instruction set is relatively stupid, but you are also right that since the Pentium Pro that has been [almost] irrelevant, because the x86 instructions get translated to something else internally and executed much as on any other modern RISC-like core. And Intel has been as competitive as anybody at building those cores, doing things like trace caches, micro-op fusion, SMT, etc.

      As for energy prices etc., my quad core desktop computer with monitor on and CPU maxed out only needs as much power as a bright incandescent light bulb (~100W). If anything I am impressed with how little power it requires and wish it was somewhat faster--so if you told me I could reduce power consumption by making it 50% slower, well, that's the opposite of what I want.

      1. Wilco 1

        Re: Benchmarks

        The iPhone 4 uses the now-ancient Cortex-A8, so it is not surprising that it is slower than a Core 2. The Cortex-A9 is significantly faster, especially on floating point, so if you benchmark your app on the latest ARM CPU you'll find the difference is now much smaller. Note also that much of the difference in Geekmark is due to memory bandwidth - mobile phones typically have a fraction of the bandwidth found in PCs. In a larger form factor one can use the same memory system.

        You're quite right that modern out-of-order CPUs all work similarly (although not at all like classic RISC CPUs). However, despite claims to the contrary, the x86 ISA still carries a considerable penalty. One can fit several Cortex-A9s in the same space as an Atom core, and each of them runs faster while using far less power. If there were no cost to x86 at all, then surely Intel, with all its technical expertise and the best process technology, could trivially beat the Cortex-A9 on power, performance and die area?

        ARM doesn't need to outperform high-end PCs to make sense in laptops and desktops. It just needs to be fast enough for most people (which might not include you) while showing a significant cost and power reduction. And I would say that we have already reached that state with the A9. I've seen Windows 8 run on dual-core Cortex-A9s - it runs pretty well. And the Cortex-A15 is due out soon...

        1. Anonymous Coward
          Anonymous Coward

          Benchmarks

          The A9 is not that much faster than the "ancient" A8. Even ARM's web site puts it at only 25% more DMIPS/MHz. I have tested on an iPad 2 and, for my software, per core it only seems to be a couple of per cent faster than an iPad 1. Please tell us how you're evaluating performance.

          Also, yes, there is an ISA penalty at the Atom level and the Cortex-A9 is better in basically all respects than an Atom. But at the high end, notice that Intel has LONG had similar if not better performance (and usually simultaneously smaller die size and lower power consumption) vs. all high end RISC chips of the same era, e.g., UltraSPARC, MIPS, HP-PA, Alpha, POWER, etc.

      2. Charlie Clark Silver badge

        Didn't realise ARM had got that fast

        "[benchmark] runs twice as fast per MHz on a Core 2 vs. an iPhone 4 (integer) and five times faster (floating point)."

        To be honest that's better than I expected. Now compare size, transistor count, on-board cache and power draw in your benchmark. How long would the battery last in an iPhone 4 that used a Core 2?

        That the ARM architecture is suitable for HPC is confirmed by the interest in nVidia's Fermi line.

    2. Ken Hagan Gold badge

      @AC 20:32

      "There are efficiency features in the basic ARM architecture which x86 cannot incorporate without abandoning basic x86-compatibility (things like code predication for smaller faster code)."

      The jury's still out on predication, at least for general instructions. (It's clearly a winner for data movement, but x86 has had CMOV since the Pentium Pro, around 15 years ago.) If AMD had wanted to add it as a general feature for x64, they could have defined a range of prefix bytes. (They probably still could. It's not as if the x86 instruction set has stood still in recent years.)
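
      (As a small illustration of the conditional-move style of code, here's a sketch: compilers will often turn the branch-free ternary below into CMOV on x86 or a predicated/CSEL-style instruction on ARM, though whether they actually do is up to the compiler.)

        /* Branch-free maximum: compilers commonly turn the ternary below into
         * a conditional move (CMOV on x86, a predicated/CSEL-style instruction
         * on ARM) instead of a branch. Whether they do is their own decision. */
        #include <stdio.h>

        static int max_branchless(int a, int b)
        {
            return (a > b) ? a : b;   /* no branch needs to appear in the generated code */
        }

        int main(void)
        {
            printf("%d\n", max_branchless(3, 7));   /* prints 7 */
            return 0;
        }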

      ARM also has basic inefficiency features, like fixed-size instructions, which mean the transistors (and power) you saved on instruction decode are spent instead on the larger instruction cache you now need. Fortunately for ARM, the ISA doesn't actually matter much anymore.

    3. Giles Jones Gold badge

      Natural progression

      Not to mention that you have to look at this as a natural progression of the computer: originally computers were huge and filled rooms, and now you can hold one in your hand.

      As computers evolve they become more mobile and severed from the mains.

      Using ARM can double the battery life of a mobile device compared to x86.

      Can you imagine a world where you could only listen to music at home on a hi-fi? Probably not, so why limit yourself to only using a computer in one place?

  24. Alan Denman

    30% TRUE.

    Obviously they want 30% of everything that ever goes on it.

    Users get the GPS and battery life and Apple get 30% of your Apple lifetime.

  25. Ilgaz

    Let me tell you a secret

    One of the reasons for the Mac sales boom after the Intel switch is the ability to run Windows, either via native boot (gamers) or in a virtual machine (engineers, business). Companies also liked being able to re-use existing code and easily optimise for a single arch.

    Windows 8 will run on ARM, but don't forget the Windows scene is a closed-source heaven, so you have to wait for every company to release and optimise for ARM. It is not Linux or BSD.

    1. Charlie Clark Silver badge
      Welcome

      Price matters as well

      As someone who switched to a Mac when the x86 ones came out, I agree with you up to a point - the ability to run Windows in a virtual machine was important to me. However, more important was that the price premium of a MacBook over a comparable Windows notebook was around €1,000 less than in the PowerPC world - still more expensive, but "acceptably" so.

      Today I saw the first advert for an Android-based notebook for less than €100. Tiny and unergonomic though it may be, this and other devices will start setting price expectations for notebooks, with people happier to settle for Android-based systems which remind them of their phones than they were with the Linux-based netbooks. Apple and Microsoft will, at some point, have to respond to this market.

      I'm currently very happy with my 13" MacBook Pro, due for replacement in summer 2012. Will be interesting to see what happens between now and then.

      1. ElReg!comments!Pierre
        Linux

        Google sure helps Linux market share

        The Google marketing machine sure helped Linux expand out of its traditional behind-the-scenes role towards media-consumption devices (smartphones, tablets, and part of the netbook market), and that is certainly a plus for Linux. However, I fail to understand what you mean by "people happier to settle for Android based systems [...] they were with the Linux based netbooks".

  26. Jean-Luc
    Thumb Down

    What about Bootcamp? And games? And business software?

    What about the folks who went along with Apple notebooks because, at the end of the day, they could run their games natively under Windows? Or run their business apps? I know one SharePoint consultant who was doing just that - he claims his MBP is the best Windows notebook he's ever had.

    What happens when you tell them that, no, no more Wintel? No, don't tell me Windows 8 will run on ARM, because that doesn't tell me that older programs, like Total War or Sharepoint-related crud, will run on ARM+Windows 8.

    Methinks this rumour should have been reported 35 days ago.

  27. Anonymous Coward
    Thumb Up

    "twice as fast per MHz "

    What the **** has "per MHz" got to do with it? Or "per anything"? Why not "per dollar" or "per watt"? Who (other than ubergeeks) needs all that speed anyway?

    "Core 2 vs. an iPhone 4"

    Now that's more like it, real products (albeit with not much detail). But again, exactly who needs all that speed anyway?

    "my quad core desktop computer with monitor on and CPU maxed out only needs as much power as a bright incandescent light bulb"

    You mean, of course, an incandescent light bulb of the sort now prohibited in the EU :) Now stop thinking geek and start thinking about energy getting much more expensive, and corporates wondering why they're using so much electricity on PCs sitting idle most of the time, and then using even more electricity on aircon to dispose of the heat from those PCs. Stop thinking geek and wonder why the main constraint in *most* folks' desktop PCs is not CPU but disk, and typically then only while Windows is running an AV scan. Not everyone spends a significant amount of time ripping Blu-rays or whatever, but for those that do, there's x86.

    My favourite low-level benchmark is CoreMark. No ARM at the top end, almost all x86. So what? The available performance on ARM is more than enough for the vast majority of people the vast majority of the time. The rest can look elsewhere, especially once even MS have accepted that Windows/x86 is no longer the only option in the game.

  28. captainnick

    A vibrant Mac App store is key

    With over a year to go before this possible transition, Apple and its ISVs have enough time to recompile their apps for the ARM instruction set. I agree with those who have mentioned the importance of a Mac App Store. Assuming Apple has trained its users and its ISVs to treat the Mac App Store as the primary application distribution conduit, then a transition from Intel to ARM for some of its lower-performance, cheaper laptops would be largely painless for its users. I've written more about it here: http://nickager.com/blog/Mac-app-store-enables-future-ISA-switch/

    1. ElReg!comments!Pierre
      Stop

      it's...

      ... unlikely that I follow this link.

      First I don't like click-baiting in general,

      than you're writing style. Your loosing me with you're writing issues. Its worth then what I can take.

      Third, I think that you're wrong. The Apple App Store has nothing to do with a possible architecture switch. There was no such store for Apple's previous five architecture switches, and it didn't seem to be a problem. Plus, I don't see any major software vendor giving in to Apple's ridiculous App Store policies without a serious fight: think Adobe and Photoshop, for example. The vendors of niche / expensive software (MathWorks with MATLAB, Wolfram with Mathematica, etc.) are going to be even harder to bring on board. A walled app store taking a 30% cut might be viable for small low-cost, high-volume apps, but not for low-volume, high-value ones.

  29. FreeTard
    Thumb Up

    depends on what the os is doing

    Coz my lovely new SheevaPlug runs:

    tor, nfs, squid, iptables and dansguardian, yet it performs as well as the 64-bit machine it replaced.

    I'll add more services when I see fit

    Loadavg:

    18:07:13 up 1 day, 31 min, 1 user, load average: 0.35, 0.21, 0.22

    For sure, I wouldn't be number crunching on it, but the point is, not every task NEEDS floating-point ops.

    I'm not a Brit, but I definitely dig ARM, as battery life is critical for me, not floating point operations!

    Genuinely, who does such things on a laptop?

  30. uhuznaa

    iOS, not OS X

    I think Apple will come up with more and different devices running iOS and iOS apps rather than a laptop running OS X on ARM. There's the Apple TV, and I wouldn't be surprised if the next thing were a 30" all-in-one Apple TV, also running iOS. And then a desktop tablet of 15" or so that you prop up on your desk and use with a BT keyboard and trackpad - running iOS, of course. Or an 11" MBA running iOS instead of OS X, which would be very much like an iPad with a keyboard.

  31. Henry Wertz 1 Gold badge

    Good idea

    "I did a quick survey of the make of machines in use in the cafe. Two out of the three times Apple MacBooks came out at over 50% share, the other time at 40% share. Total sample size is now probably about 50 machines, so statistically significant.

    What does this say?"

    Absolutely nothing. If I go to a redneck beer-drinking-and-fishing hangout, I can "prove" that 90% of the population drives Ford pickup trucks.

    -------------------

    That said, moving to ARM would be a smart move. They have superior power use (something like 1 watt at 1.2GHz), the modern designs have a pretty strong FPU, and they have the "NEON" instruction set and a DSP which can do virtually all the video-decoding work (they can decode video using about 30MHz of processing power, and that is without using any of the video chip's decoding support). A lot of OS X is heavily multithreaded; it'd be really easy to stick 8 or more dual-core ARM CPUs into a notebook and still use less than the power budget of the existing Intel processor.
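
    (For a flavour of what NEON looks like from C, here's a minimal sketch using the standard arm_neon.h intrinsics; the saturating-add "brighten" kernel is just an illustrative example, not anyone's real decoder code.)

      /* Minimal NEON example: brighten a buffer of 8-bit pixels, 16 at a time,
       * using saturating adds so values clamp at 255 instead of wrapping.
       * Build with an ARM compiler, e.g. gcc -mfpu=neon. Illustrative only. */
      #include <arm_neon.h>
      #include <stdint.h>
      #include <stdio.h>

      static void brighten(uint8_t *pixels, int count, uint8_t amount)
      {
          uint8x16_t add = vdupq_n_u8(amount);          /* splat 'amount' into 16 lanes */
          int i = 0;
          for (; i + 16 <= count; i += 16) {
              uint8x16_t v = vld1q_u8(pixels + i);      /* load 16 pixels */
              vst1q_u8(pixels + i, vqaddq_u8(v, add));  /* saturating add, store back */
          }
          for (; i < count; i++) {                      /* scalar tail */
              int v = pixels[i] + amount;
              pixels[i] = v > 255 ? 255 : (uint8_t)v;
          }
      }

      int main(void)
      {
          uint8_t px[20] = { 0, 100, 200, 250 };        /* remaining entries are zero */
          brighten(px, 20, 10);
          printf("%u %u %u %u\n", px[0], px[1], px[2], px[3]);  /* 10 110 210 255 */
          return 0;
      }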

    "If you recompile OSX for ARM64 and you keep the APIs identical, why would you need to emulate anything for additional software?"

    The CPU must be emulated; a program is not just a string of API calls. CPU emulation technology is good, but this can still be a significant slowdown. Of course, you are right that none of the other hardware has to be emulated at all. (Although I haven't tried it...) qemu for Linux can do exactly this -- for instance, put some Linux-for-Intel libraries and binaries on an ARM box and run them there... most system calls are the same, and qemu can "convert" the few platform-specific ones (usually different for historical reasons).
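
    (Why emulation costs so much is easy to see in miniature: every guest instruction turns into a fetch, a decode and a dispatch on the host. The toy interpreter below is purely illustrative and nothing like qemu's actual dynamic translation.)

      /* Toy CPU interpreter: each "guest" instruction costs several host
       * operations (fetch, decode, dispatch), which is the basic overhead
       * that real emulators work hard to amortise. Illustrative only. */
      #include <stdio.h>
      #include <stdint.h>

      enum { OP_HALT, OP_ADD, OP_SUB };    /* tiny made-up instruction set */

      int main(void)
      {
          uint8_t program[] = { OP_ADD, 5, OP_ADD, 7, OP_SUB, 2, OP_HALT };
          int acc = 0;
          for (size_t pc = 0; ; ) {
              uint8_t op = program[pc++];      /* fetch */
              switch (op) {                    /* decode + dispatch */
              case OP_ADD: acc += program[pc++]; break;
              case OP_SUB: acc -= program[pc++]; break;
              case OP_HALT: printf("acc = %d\n", acc); return 0;  /* prints acc = 10 */
              }
          }
      }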

    OSX supports "FAT" binaries (inherited from NextStep) -- these are binaries that contain, potentially, SPARC, Motorola 68K, PA-RISC, Motorola 68K, PowerPC, Intel, and (probably already) ARM code in a single binary (probably the Mach-O file format already has ARM support, since iOS is a mutant OSX).

    So, if Apple flubs it, you'll end up with Macs that "can" support native code but are actually constantly running everything under emulation, not getting the performance they should. If they do moderately well, then by the time the ARM machines ship they can at least make sure video playback and Apple's own apps use NEON and any available DSP (ARM video acceleration, for instance, is very effective) *and* support whatever acceleration the video chip offers. If they get it right, they will release, ASAP, Xcode, gcc, etc. that support Intel + ARM "fat" binaries, so that as app makers rebuild their apps they naturally gain ARM support*, and by the time the ARM machines ship a fair amount of software can be ARM-native.

    *I've run Linux distros on PA-RISC, PowerPC (Mac) *and* an IBM POWER system, DEC Alpha, as well as Intel, and I have a command-line-only Debian install on my (ARM) phone too. None of these feel like a stripped-down port; portable code is portable. I think most apps will be similar going from Intel OS X to ARM OS X - it'll just be a matter of hitting "build project" or whatever, not some troublesome porting process.

  32. Alan Denman

    Financial sense for Apple

    It's win win for Apple.

    First off, the SoC will cost $10 instead of $200.

    Secondly, it certainly becomes a games-console-type machine where the user has to buy everything from the App Store.

    That 30% is worth billions, and your average user is not bothered that they are barred from writing software, or even from choosing where to buy it.

  33. Penti

    Huh

    Huh? Apple laptops are workstations, and 64-bit ARM is still years away, maybe around 2014-15. Are you serious? Would people really run FCP, Avid etc. on an ARM laptop within a few years? Why would Apple benefit from another split of architectures? It would make no sense to keep desktops and workstations on x86 if that happened; you could simply build bigger ARM chips with more I/O, cache, memory bandwidth and more cores. But that would simply return them to the PowerPC days of having to get the tech developed and kept on par with Intel.

  34. jzedward

    End of the Hackintosh?

    I can't help but think this strategy has two strands: one, lower-cost processors for Apple while keeping the premium pricing; two, and maybe more importantly, a revitalisation of Apple's close tie-in between processor and hardware, and death to the Intel Hackintosh. Of course, the Hackintosh could be re-invented on ARM laptops, but this feels like a shift to iOS-style laptops to me.

  35. JollyBeeeben

    ARM and Intel co-exist?

    Surely it's possible that both chips may coexist within the same laptop. Imagine being able to hot-swap into iOS to significantly extend battery life. If, say, you were editing a doc located in the cloud and you had 10% battery left, you could switch over to iOS and continue editing using significantly less battery power.

  36. Mike123456

    This does not wash

    Why waste your time releasing updated hardware to take advantage of Intel's new connection technology (Thunderbolt), only to drop it in favour of (yet another) CPU manufacturer?

    Notwithstanding the fact, noted by others, that the OS will need recompiling, and that this will lead to versioning incompatibilities between the desktop, laptop and iOS versions of applications, why waste money tooling up for Thunderbolt for a four-week-ish run of new hardware?

    This, surely, is a sham. As much as I now question Apple's decisions (over the last four years) on how to conduct business, I don't believe this is a serious story.

  37. Matthew 17

    Can't see it happening for anything other than an Air

    For a sort of iPad-with-built-in-keyboard device, somewhere between an existing MacBook Air and an iPad, there's possibly a market for this, but the idea that all their laptops would follow suit is daft. I have a Mac Pro and a MacBook Pro; being able to work on both and swap software between them is essential (I use Logic mostly). Having different architectures just wouldn't be practical (a lot of music plugins allow you to install on multiple machines - you just swap the USB dongle to the one you're actually using at the time), so they'd have to shift the whole lot to one architecture, where you'd suffer the loss in processor performance and the return of Universal binaries. Sure, you could have a bazillion cores, but OS X and Macs have always been terrible at utilising multi-core environments efficiently (I think the only program I have that does it well is Handbrake). A lot of software became available for OS X when Macs adopted Intel, which made them more popular; Apple will know this.

  38. Anonymous Coward
    Boffin

    @bazza

    > 7th May 2011 07:59 GMT

    >

    > The ARM core, even today, is still about 32,000 transistors

    Do you have a reference for that?

    (Not trying to snark. I was seriously trying to look up transistor counts and similar stats for modern ARM processors and drawing a blank. Google searches are overwhelmed by references to "Tri-Gate Transistors" and/or "2D" in contrast to that; I failed to find anything on arm.com or Wikipedia; etc.)

This topic is closed for new posts.
