Intel plans to cut products — we guess where they’ll happen

With a major downturn in revenue and profitability over the past six months, Intel has some tough decisions ahead as it seeks to make billions of dollars in cuts while the beleaguered semiconductor giant tries to enact its grand comeback plan. As announced last week, the American chipmaker plans to reduce spending by $3 …

  1. captain veg Silver badge

    Don't get us wrong. Intel's NUC mini PCs are cool, little feats of engineering

    NUCs are great. The engineering is first class. It's a shame they are so ugly.

    Alternatives from the likes of ASRock and Gigabyte look far nicer on a desk or shelf and seem to work much the same. This is definitely an area where Intel can revert to supplying the silicon and letting the Taiwanese do the packaging.

    -A.

    1. Lordrobot

      Re: Don't get us wrong. Intel's NUC mini PCs are cool, little feats of engineering

      Welcome to 2013... Please explain to me why Intel is in this business when its core business is selling chips to computer board makers. If you were a mini-PC maker, would you use a competitor's CPU? Intel has the same plan with its fab business... They actually expect Qualcomm to bring their proprietary designs to Intel for fabrication. Do you see any problem with this that also parallels the NUC... or should I say Apple Mini concept?

    2. teknopaul

      Re: Don't get us wrong. Intel's NUC mini PCs are cool, little feats of engineering

      I have both types. Because they are tiny, they are tucked away out of sight, so it matters little what they look like.

      Agree the Intel ones are not pretty.

    3. Oglethorpe

      Re: Don't get us wrong. Intel's NUC mini PCs are cool, little feats of engineering

      I don't mind the aesthetics of NUCs and, even if I did, I'd still prefer them over the competition because of how nicely they're put together. The alternatives feel creaky and fragile.

      1. llaryllama

        Re: Don't get us wrong. Intel's NUC mini PCs are cool, little feats of engineering

        I use Asus mini PCs in a light industrial environment, this particular model comes with a 7 core AMD processor and a five year warranty. They have an excellent mounting system so you can slap them on the back of a touchscreen monitor.

    4. Anonymous Coward

      Re: Don't get us wrong. Intel's NUC mini PCs are cool, little feats of engineering

      Don't forget overpriced as well as ugly.

    5. Steve Davies 3 Silver badge

      Re: Don't get us wrong. Intel's NUC mini PCs are cool, little feats of engineering

      I have one NUC and a Nipogi mini PC. The latter knocks the Intel offering into a cocked hat. It uses an AMD CPU and runs my DB server 2-3 times as fast as the NUC.

      It seems to me that Intel has some great ideas and products and then neglects them. The NUC is, IMHO, one of them. If there had been more updates, especially to the CPU, they might have sold 20-40 million instead of just 10.

      Once unleashed by removing Windows, the mini PC is more than up to the job I need it to do.

      I've just ordered two more.

      Intel does have a lot of cruft in its catalogue. I'm waiting for them to pull an Elon and fire half the workforce. :)

  2. martinusher Silver badge

    PCs are a blind alley

    The x86 architecture is a huge cash cow but it was never a particularly good architecture. It is a marvel of life extension, taking a 'meh' architecture from the 1970s and stretching it far beyond where it really should have gone. It will probably continue to be a major earner for years to come, but it would be smart to have an alternative waiting in the wings. This is an area where Intel has proved very poor; it's not that the company can't innovate, but over the years the bean counters have continually shut down forward-looking initiatives when quarterly sales fail to meet targets. Like other large corporations, it then uses acquisitions to make good the shortfall, sometimes paying way over the odds for something it deems critical. (I witnessed this first hand with WiFi: throwing cash at commodity technology. They eventually got a viable, even good, product, but they spent enormous amounts of money to get there, while competitors arrived in the same time frame using readily available off-the-shelf commodity technology.)

    Sometimes you can be too successful.

    (Incidentally, sometimes it pays not to lay people off. Layoffs are the normal corporate sop to the bean counters, but they can be shortsighted. Every engineer you employ is one less for the competition, and it's probably not wise to release into the job market a bunch of engineers who know your company, its work, and its strong and weak points. I know that corporate types tend to think of people as interchangeable widgets - you can always get a bunch more when you need them - but the reality is very different.)

    1. Flocke Kroes Silver badge

      Re: alternative architectures

      Intel did try other architectures but not in a way that stood a chance of commercial success.

      For a start: the Intel iAPX 432, very ambitious for the early '80s. Next up, Itanic: designed to require more transistors than anyone else could put on a chip. They did some high-end ARM CPUs at the beginning of the millennium. All high-end, comparatively low-volume devices. Dividing huge NRE costs by the low volume made the chips both expensive and low margin. Even worse, they had to compete for fab resources against very high-margin x86 chips.

      Successful new architectures all took off as low-performance parts compared to x86. Ambitious dreams aged into the low-performance range, got built on cheap end-of-life process technology, and were sold off at low margin in bulk for embedded systems. The huge volumes and long, stable production runs divided the NRE costs down enough to finance the next generation of slightly higher-performance low-end chips.
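
      The arithmetic behind that point is simple enough to sketch. A minimal toy calculation (all dollar figures and volumes here are invented purely for illustration, not Intel's actual numbers):

```python
# Per-chip cost = amortised one-off engineering cost (NRE) + marginal
# silicon cost. Illustrative numbers only.

def unit_cost(nre, volume, marginal_cost):
    """Spread the fixed NRE over the production run, then add the
    per-unit manufacturing cost."""
    return nre / volume + marginal_cost

# Same $200M NRE and $20 marginal cost, very different volumes:
low_volume  = unit_cost(200e6, 100_000, 20.0)     # niche high-end part
high_volume = unit_cost(200e6, 50_000_000, 20.0)  # bulk embedded part
print(low_volume)   # 2020.0
print(high_volume)  # 24.0
```

      The low-volume part carries $2000 of NRE per chip; the bulk embedded part carries $4. That is the mechanism by which long, stable, high-volume runs finance the next generation.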

      1. captain veg Silver badge

        Re: alternative architectures

        I seem to remember that i960 was supposed to be the future, once.

        -A.

        1. hammarbtyp

          Re: alternative architectures

          Loved the i960... so many interrupt lines compared to the x86, whose interrupt handling is a bit of a hack. Great embedded processor, but I guess the market was not big enough.

        2. Michael Wojcik Silver badge

          Re: alternative architectures

          The problem with i860 and i960 was they were Just Another RISC CPU, at a time when there were relatively popular established competitors (MIPS, SPARC) and interesting newcomers (Alpha, PPC, PA-RISC).

          i860 looked good compared to the '486, when they were contemporaries, if you didn't need x86 compatibility. And IIRC it had some success in embedded applications. But it was tough to argue for it over the RISC competition.

          The iAPX 432, on the other hand, was exciting, the only commercial capability CPU available at the time besides the System/38, I think. But it was too ambitious and even in the early '80s Intel couldn't compete with its own x86 architecture.

      2. Version 1.0 Silver badge

        Re: alternative architectures

        x86 was high performance, but the prior 8080 and Z80 parts were amazingly easy to set up with all sorts of interfaces to the CPU, making the basic design and construction of devices very reliable. Since then the whole environment has changed, most of it for the good, but there are some issues, so dumping items that are difficult to use, or not used much, is just a way to move forward.

      3. Michael Strorm Silver badge

        Re: alternative architectures

        > Itanic: designed to require [more?] transistors than anyone else could put on a chip.

        I'm not sure that was its main problem, though?

        As far as I'm aware, one of the major issues with Itanium was its reliance on the assumption that it could pass the responsibility for instruction scheduling (which on conventional processors is calculated in hardware at runtime) off onto the compiler instead. Only it turned out that this was hugely more complicated than had been assumed.

        I believe that this is what Donald Knuth was referring to when he said "The Itanium approach...was supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write."

        (I don't know the details, but I'm guessing that- in practice- instruction scheduling turned out to be way more dynamic and reliant upon knowledge of the current state of the system than predicted and thus much harder to predict and statically allocate and allow for in advance, i.e. at compile-time rather than runtime).

        Anyway, I'm very far from an expert in this field- so feel free to correct or expand upon any of the above.
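
        That guess can be made concrete with a deliberately crude toy model (the function names and latency numbers below are invented for illustration; real compilers and cores are far more subtle). A static, Itanium-style schedule has to pad every load slot to a worst-case latency it cannot rule out at compile time, while an out-of-order core pays only the latencies it actually observes:

```python
# Toy comparison: compile-time (static) vs runtime (dynamic) scheduling
# of a chain of dependent loads. Hypothetical numbers, purely illustrative.

WORST_CASE_LATENCY = 10  # cycles the compiler must assume for any load

def static_schedule_cycles(actual_latencies):
    """An EPIC-style compiler fixes the schedule before it knows whether
    each load will hit cache, so every slot is padded to the worst case."""
    return len(actual_latencies) * WORST_CASE_LATENCY

def dynamic_schedule_cycles(actual_latencies):
    """An out-of-order core issues each dependent instruction as soon as
    its data actually arrives, so it pays only the observed latencies."""
    return sum(actual_latencies)

# Five dependent loads: four cache hits (2 cycles each), one miss (10).
observed = [2, 2, 10, 2, 2]
print(static_schedule_cycles(observed))   # 50
print(dynamic_schedule_cycles(observed))  # 18
```

        In reality compilers hoist loads and use speculation hints rather than simply padding, but the asymmetry of information (the runtime knows the cache state, the compiler does not) is the heart of it.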

        1. eldakka

          Re: alternative architectures

          > (I don't know the details, but I'm guessing that- in practice- instruction scheduling turned out to be way more dynamic and reliant upon knowledge of the current state of the system than predicted and thus much harder to predict and statically allocate and allow for in advance, i.e. at compile-time rather than runtime).

          It also made the compilers themselves incredibly complex, too complex to properly debug.

          Another issue was AMD. Specifically, the AMD64 extensions that brought 64-bit to x86. Intel had been banking on people who needed 64-bit address spaces and other 64-bit goodness migrating to Itanic because, well, they didn't have a choice. x86 was 32-bit, and Intel was not interested in taking x86 to 64-bit because that would cannibalise sales of Itanic and destroy its use-case: 64-bit compute.

          So once AMD did introduce their 64-bit extensions, workloads that required 64-bit moved to cheaper, better-understood, backwards-compatible x86 with the AMD64 extensions, which didn't need these exceedingly complex compilers for 'native' speed or any sort of performance-destroying emulation to run legacy 32-bit x86 apps. Either that, or to the pre-existing, better-known and understood 'big iron' RISCy 64-bit alternatives from IBM, Sun, etc.

          When Intel licensed AMD's AMD64 extensions and relabelled them x86_64, I think everyone saw the writing on the wall for Itanic. And over the long term, of the other 64-bit competitors (Sun etc.), IBM with Power is really the only non-x86 one still around. There are some niche products still out there, but those niches are shrinking.

          1. BOFH in Training

            Re: alternative architectures

            > IBM with Power is really the only non-x86 64-bit competitor still around.

            You forgot ARM64, not to mention RISC64.

            1. eldakka

              Re: alternative architectures

              Neither of those is currently a competitor to x86 or Power in the server space - which is what we are talking about here in reference to Itanic.

              They hope to be - and maybe one day will be - future competitors. But today? They are interesting technology demonstrations; none of AMD, Intel or IBM is losing sales to them today.

        2. JacobZ

          Re: alternative architectures

          Another reason for the failure of Itanium is that while it was being developed, x86 processors got very good indeed at optimizing instruction throughput with out-of-order execution, speculative branching, and other clever "tricks". Those same optimizations turn out to be extremely difficult to achieve with the Itanium architecture. This in large part is what Knuth's comment was getting at.

          This reply on StackOverflow gets into more of the details: https://stackoverflow.com/questions/1011760/what-are-the-technical-reasons-behind-the-itanium-fiasco-if-any

        3. Michael Wojcik Silver badge

          Re: alternative architectures

          Itanium is also deliberately less tolerant of some common programming errors which are often irrelevant in practice. For example, it has a "Not a Thing" (NaT) trap representation for integer registers.

          I once spent quite a bit of time debugging an intermittent SIGILL in an HP-UX program which turned out to be due to a missing declaration for a function in a library. The function was defined with void return type; with no declaration in scope in the caller, K&R rules applied, including implicit int return type. The compiler inserted a move from the return-code register to some working area (don't recall if it was another register or a stack location) on return from this call, but since the called function had void return type, it didn't put a value in that source register. If the source register happened to contain not-a-value, the move would generate a CPU trap, which under HP-UX became SIGILL.

          So sloppy C code (i.e. most of it) could run foul of various sorts of intermittent, hard-to-find errors on Itanium platforms. That's in addition to the usual type-punning problems you see with C on I32LP64 platforms. On top of the performance issues, this made working with Itanium a real pain.
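
          A loose Python analogy of that failure mode (the function name here is made up): a callee that produces no value, and a caller that assumes one. Nothing goes wrong until the phantom result is actually consumed, much as the NaT trap only fired on the move out of the return register:

```python
def log_event(msg):
    # Defined like the library function in the story: it returns nothing.
    print("event:", msg)

# The caller wrongly assumes log_event returns an integer status code,
# just as the K&R implicit-int rule made the C caller assume one.
status = log_event("boot")

# Nothing fails until the phantom result is actually consumed:
try:
    status += 1
except TypeError:
    print("trap: consumed a value that was never produced")
```

          Python fails deterministically here; the Itanium version was nastier precisely because the source register only sometimes held NaT, making the trap intermittent.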

    2. Michael Strorm Silver badge

      "x86" isn't really x86 any more and hasn't been for a long time...

      > [x86] was never a particularly good architecture. It is a marvel of life extension, taking a 'meh' architecture from the 1970s and stretching it far beyond where it really should have gone.

      I suspect you're already aware of this, but for the benefit of those who aren't...

      A large part of the reason that Intel has apparently been able to keep developing x86 for so long is that they actually abandoned the original native x86-derived internal architecture decades ago!

      Since the launch of the Pentium Pro (and its mainstream counterpart, the Pentium II) over 25 years back, Intel's "x86" processors have been essentially a compatibility layer around a broadly RISC-like* core, converting x86 instructions into microcode/micro-operations that can then be reorganised and handled in whatever manner is deemed appropriate.

      * Although commonly described this way, I should acknowledge that I've had some people dispute how RISC-like the design actually is.** The important thing here, regardless, is that it's not x86-derived.

      ** Of course, so long as the external "API" retains x86/x64 compatibility and the internals remain opaque, they could (and possibly have) significantly or completely changed the internal implementation/architecture over the years anyway.

      1. Peter Gathercole Silver badge

        Re: "x86" isn't really x86 any more and hasn't been for a long time...

        I think you could say the same about the API on any long-lived processor family.

        The other example you quote as successful in the server space, IBM Power, has had so many internal design changes under the covers that the underlying silicon bears no resemblance to the original RIOS design of the late 1980s (yes, I'm aware the RS/6000 launched in 1990, but it was a late-'80s design and had been running internally at IBM for a year or two - I was working at IBM before the launch).

        And compare to other processor designs. ARM architectural licensees literally have a licence to re-implement the architecture using different logic. The original PDP-11 was built in TTL, before moving to AMD bit-sliced processors and then VLSI. Each process move would have necessitated a full re-design, but the instruction set remained broadly the same. Ditto the VAX and HP's Precision Architecture. And compare the S/360 with current z16 processors (I know, I'm stretching a processor 'family' to its extremes here).

        A processor family is really just the API.

  3. Lordrobot

    Intel Circling the drain... When Monopolies end

    Do people really talk like this? Snowballing Smooze...

    "We remain committed to optimizing our value creation efforts through portfolio honing; reallocation of resources to higher returns, higher-growth businesses; M&A; and, where applicable, divestitures,"

    Instead of honing, reallocation, etc., Intel should make better products and stop being the darling of US sanctions, which are destroying Intel.

    I think it is safe to say that Intel has missed every major trend in the last 20 years. They missed the handheld revolution. They blew it on fab at 10nm, blew it on 5G. AMD is bashing Intel's brains out. The $84 billion in stock buybacks. And now chasing the pipe dream of turning Columbus, Ohio into the Mecca of fab... a low-margin, manpower-intensive business, not suitable in any way to the Murican workforce. It is a cascade of rubbish, a dying business. This new born-again CEO has no feel for the business units at all. Every business unit is playing catch-up, and the big surprise at earnings is that every one of them is underperforming. Soon Qualcomm is going to be beating Intel's brains out in CPUs. And Qualcomm is quite efficient.

    Does anyone at Intel have any clue about where markets are going? How could they miss the handheld market? How is that possible? All they do now is look at other companies and say, we should do that. So they chase Nvidia in GPUs, trying to play catch-up but never getting there. Then they look at TSMC and say... "hey, we should go into fab" AFTER THEY HAVE FAILED AT IT! So they go back to playing catch-up, but they will never get to the TSMC level, just as they never got to the Nvidia level.

    Intel should look at Walmart and Amazon. Both have thriving grocery businesses.... Intel should go into the grocery business. Then they can "HONE" the portfolio of fresh vegetables and reallocate higher returns for baked goods.

    Do people really talk like that?

    I see one poster says... "Don't fire the engineers... they may go work someplace else." Like where? When your engineers miss the handheld revolution, they should all be fired as utterly worthless.

    1. teknopaul

      Re: Intel Circling the drain... When Monopolies end

      Yeah, you wonder if anyone at Intel would think like the Reg article.

      Probably the decision making process will be more like...

      First in, first out, unless the product manager plays golf with the C-level manglement or it's on the up slope of the hype curve.

      Then pay some hip kid to write marketing blurb to explain the decision.

      It's amazing how big corps can be suckered by hype and fail to do the numbers, even when doing the numbers is literally just a case of asking someone in the same company.

  4. man_iii

    NUCs

    That I prefer NUCs over anything else for running VMware and other hypervisor solutions in my personal "lab" tells me Intel is truly dropping the ball when it comes to business. I guess I need to rethink, buy a couple of R-Pi boards and some other mini PC brand, and really start building my own management and IO solutions. So far I'd held off since NUCs seemed likely to stay, but Intel is pushing me towards other vendors.

    1. doublelayer Silver badge

      Re: NUCs

      Intel is pushing you to other vendors exactly how? The article suggested cutting NUCs, but Intel didn't say anything of the kind, so if it's about wanting future supplies, you have no reason to think you won't be able to get more. In addition, the benefit of a NUC-style device built on x86 is that, if they did cancel them tomorrow, you only have ten other companies making small devices that can boot the same OSes and run the same programs.

      You also suggest that you don't have much experience with alternatives. For example, considering Raspberry Pis for, in your words, "running VMware and any other Hypervisor solutions". That's not a workload for which the Pi's going to shine. And that's if I'm charitable and assume that the VMs you want are light on resources and don't have any CPU emulation required.

  5. Vikingforties

    Some gone already

    As far as I can tell, one of Intel's more random ventures has gone already. Their Geospatial group appears to have joined the choir invisible.

    Maybe it's just resting.

  6. Roj Blake Silver badge

    FPGA

    I can see them getting rid of the FPGA business. Like Optane, it's something that's not really taken off.

    1. RichardBarrell

      Re: FPGA

      Plausible. They might not want to specifically because really big FPGAs are extremely useful for doing chip design, since they can run prototypes much faster than a simulator.

  7. JacobZ

    Software

    Anything software will be on the block, especially if it can't show that it promotes sales of Intel - and only Intel - silicon. For example DAOS; technically it's open source so somebody else could pick it up, but Intel will stop staffing it.

    Intel has little patience and even less success with software at the best of times, and these are very much not the best of times.

  8. Jon 37

    Split the company

    It seems inevitable that Intel will, eventually, fully split its Foundry arm from the rest of the company.

    Whether that be a demerger or a sale of the Foundry business.

    That will allow a fabless Intel to avoid the huge capital costs of the fabs. The IFS foundry company can take on that responsibility, and can sink or swim on its own merits.

    This will allow Intel to save most of the company from the right mess the Foundry division has got into.
