I wonder what Intel would do if both M$ and Apple went ARM-first on their primary OSes.
Where is Intel's ARM competitor?
Where is Intel in the mobile phone or tablet space?
Where is Intel in discrete GPUs?
Intel used the 40th anniversary of its x86 architecture to warn chip rivals to mind their step when emulating its instruction set. The reigning desktop and data center server CPU king (AMD, you know it's true) said it would not hesitate to sic its lawyers on any competitor whose emulation tools got too close to copying the …
I believe this is the right direction to think in.
Intel isn't trying to secure the mobile device market. That ship has sailed. In fact, with the advent of WebAssembly, it is likely the choice of x86 or ARM will have little real impact now. Intel's real problem with mobile platforms like Android was the massive amount of native code written for ARM that wouldn't run unmodified on x86. With WebAssembly that will change.
Intel is more concerned that with Microsoft actively implementing the subsystem required to thunk between emulated x86 and ARM system libraries, it will be possible now to run Windows x86 code unmodified on ARM... or anything else really.
That means that there is nothing stopping desktop and server shipping with the same code as well. This does concern Intel. If Microsoft wants to do this, they will have to license the x86 (more specifically modern SIMD) IP. Intel will almost certainly agree to do this, but it will be expensive since it could theoretically have very widespread impact on Intel chip sales.
Of course, Apple, who proved with Rosetta that this technology works, could have moved to ARM years ago. They probably didn't because they decided instead to focus on cross-architecture binaries via LLVM to avoid emulating x86. Apple will eventually be able to transition with almost no effort because all code on the Mac is compiled cross-platform... except hand-coded assembly. Microsoft hasn't been as successful with .NET, but recent C++ compilers from Microsoft are going that way as well. The difference is that Microsoft has never had the control over how software is made for Windows that Apple has had for Mac or iOS.
Intel has competitors for ARM, but they are currently double the price, and its x86 chips are 2-1000x the price depending on the performance chosen.
Intel doesn't have a cost-competitive answer to ARM because the market hasn't forced them to provide one yet. And I'm not sure Intel (at least as we know it) could survive on ARM margins... Those $5B+ fabs are hard to justify if you need them to make 50 billion CPUs to break even...
"ARM doesn't actually manufacture its chips, it simple patents them and licenses the design.. if Intel followed this model, it wouldn't need them $5 billion fabs!"
The overall cost of producing an ARM-based chip tends to be in the US$1-US$20 range, with US$7-US$15 being typical for smart devices, on a per-SoC basis including the necessary licensing for ARM/third parties, fab costs and testing/QC. The fab prices are in there, but the fabs are at least one or two generations of cost/technology behind Intel and utilised at a much higher level (customers are stacked up, and designs are delayed if there are problems).
Intel's matching costs, running everything in-house, are in the US$15-US$50 range.
Bejesus! Intel's blog shows a picture stating that they have added more than 3,500 instructions to x86! Can that really be true? How many transistors are needed for that?
That is why a RISC with 10 million transistors is faster than a CISC with 50 million transistors (10 million x86 transistors are needed just to figure out which x86 instruction you just read and where the next instruction starts, 20 million are never used but are needed for backwards compatibility, 20 million are used for cache, etc.).
Some would also suggest that RISC suffers similar problems when optimized for transistor depth where high-retirement operations are concerned. Modern CISC processors have relatively streamlined execution units, which is what consumes most of their transistors... as with RISC. However, RISC, which has to increase its instruction word size regularly to expand functionality, suffers the burden of either requiring more instructions for the same operation than CISC, or a higher cost of data fetching, which results in longer pipelines with a greater probability of cache misses. Since second-level cache and above, as well as DDR, generally depend on bursts for fetches, RISC with narrow instruction words can be a problem. Also consider that pipeline optimization of RISC instructions, which may carry branch conditions on every instruction, can be highly problematic for memory operations.
Almost all modern CPUs implement legacy instructions (such as 16-bit operations) in microcode, which executes much like a JIT compiler that compiles instructions in software.
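To make that concrete, here is a toy sketch of the idea: a complex legacy instruction decodes into a sequence of simpler micro-ops, much the way a JIT expands bytecode. All opcode and micro-op names below are made up for illustration; real microcode ROMs are far more involved.

```python
# Toy sketch: a legacy (complex) instruction is expanded into a
# sequence of simple micro-ops before execution. Opcode names are
# illustrative only, not real x86 mnemonics or real micro-ops.

MICROCODE_ROM = {
    # A 16-bit "add memory, register" touches memory twice, so it
    # decodes into three RISC-like micro-ops.
    "ADD16_MEM_REG": ["LOAD_TMP", "ADD_TMP", "STORE_TMP"],
    # A register-register add is already simple: one micro-op.
    "ADD_REG_REG": ["ADD_RR"],
}

def decode(instruction):
    """Return the micro-op sequence for one architectural instruction."""
    return MICROCODE_ROM[instruction]

def run(program):
    """Flatten a legacy instruction stream into the micro-op stream
    the execution units actually see."""
    uops = []
    for insn in program:
        uops.extend(decode(insn))
    return uops

print(run(["ADD16_MEM_REG", "ADD_REG_REG"]))
# A 2-instruction legacy program becomes 4 micro-ops.
```

The point of the sketch is only that the "architectural" instruction set and the instructions the core actually executes are decoupled, which is exactly what makes legacy support cheap in transistors but not free in decode logic.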
Most modern transistors on CPUs are spent on operations such as virtual memory, fast cache access and cache coherency.
This sounds a lot like Microsoft's attempts to spread FUD about how many patents protected FAT. Patents concerning how the ISA is implemented in hardware cannot possibly be relevant to a software emulation. Patents concerning how the ISA was cleverly designed to enable an optimal implementation may not be relevant either, if some court decides (as with the Java lawsuit) that you can't protect an interface. The risk for Intel is that this strategy backfires and emulation is found in court to be entirely legal.
When Microsoft introduces the capability, Intel sues. Case spends a few years in trials and appeals, meanwhile adoption of Windows on ARM is very limited because of uncertainty about its headline capability of emulating x86 Win32 apps.
Even if Intel eventually loses the court case, they get several more years of x86 being the only alternative for running Windows, and pocket billions as a result. If they end up having to pay Microsoft's court costs, it's chicken feed compared to the many millions of additional x86 CPUs they'll sell.
The real loser in all this would be Qualcomm, who would have the Snapdragon 835 ready to go for PC OEMs to install in low end Windows/ARM PCs, but have few takers. And potentially Apple, if they are planning to migrate the Mac to their ARM SoCs, without losing the ability for their customers to run Windows apps (it is unclear whether they want to do this, or whether losing the ability to run Windows apps is what has prevented it so far, but it is possible)
I can still see them being slapped early which would then force them into playing defense and pushing it through appeals. Each setback is going to greatly embolden folks to give it a try. Before you know it everyone from Nvidia to Sunway to Power has Windows running via emulation. It might not make sense for Sunway or Power but I have to believe Nvidia would love to have Windows running on their Tegra series of ARM SOCs and I'd think the graphics side would be a snap.
Why would they be 'slapped early'? Intel's patents definitely hold for actual chip implementations; the only question is whether they also hold for software emulation of the chip. I wouldn't be surprised if Intel ends up winning the case, but if they don't, it isn't going to be something that is shot down quickly.
Why would they be 'slapped early'
If I could read judges' minds, I'd have retired long ago. Having not read the patents myself, I can see it happening simply because, very often in cases like this, judges are a fickle and non-technical lot, and at times it's largely a roll of the dice, with the decision hinging on which lawyer is more eloquent and convincing that particular day rather than the actual bare facts. Hell, if it were solely based on facts, the judge could just read the patent in chambers, compare it to the implementation, and save everyone a bunch of money, but that never happens. Most times I'd be rather surprised to find the judge ever actually read the patent word for word, especially the long-winded ones that run on for pages on end.
How the ISA was cleverly designed...
To make it easy to translate 8080 assembly code from the 1970s to run on it...
With the 16-bit operations and 1MB addressing bodged onto the top (8086)...
And 16MB addressing bodged on top (80286)
And 32 bit operations and more address space bodged on top (80386)
@Simon Harris - "How the ISA was cleverly designed..."
Intel's heart was in the right place when they made many of their ISA and chip decisions. They just didn't execute them very well.
Imagine if segments on the 808x were page (256B) aligned instead of paragraph (16B) aligned. And had they released an 80186 core in an 8086 package. And had they released an 80286SX that made the MMU an optional external chip (like the MC68451 and '851). It would have made life prior to the 80386 cheaper, faster, and a whole lot less miserable (no need for EMS or XMS).
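The arithmetic behind that wish is easy to check. Real-mode addresses are computed as segment times the alignment granularity, plus a 16-bit offset; the sketch below compares the actual 16-byte (paragraph) granularity against the hypothetical 256-byte (page) granularity the comment proposes.

```python
def physical_address(segment, offset, granularity):
    """Real-mode style addressing: the 16-bit segment register is
    scaled by its alignment granularity, then the 16-bit offset is
    added."""
    return segment * granularity + offset

# The 8086 used 16-byte (paragraph) granularity: the highest
# reachable address is 0xFFFF * 16 + 0xFFFF, just over 1MB.
top_16b = physical_address(0xFFFF, 0xFFFF, 16)

# With the hypothetical 256-byte (page) granularity, the same
# 16-bit segment registers would have reached roughly 16MB.
top_256b = physical_address(0xFFFF, 0xFFFF, 256)

print(hex(top_16b))   # 0x10ffef, ~1MB
print(hex(top_256b))  # 0x100feff, ~16MB
```

So the same register widths would have bought a 68000-class address range, which is exactly the "cheaper, faster, less miserable" point above.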
For all their past mistakes, the 80386 did resolve most issues. Flat memory, 32 bit registers, more orthogonal instruction set, V86 mode, paging, real/prot mode switching, etc...
It just sucks that neither Microsoft nor Digital Research released a proper 32-bit successor to DOS at the time. Imagine a lightweight text-mode version of Windows 95 back in '86. Instead, you had to muck with DOS extenders or go down the expensive path of a GUI-based OS, like OS/2 or Win 2.x/3.x. Yuk.
In 1986 most people lacked the RAM to run a real multitasking OS. Many PCs barely had the RAM to run real-mode applications, or the early GUI systems; usually not more than 1-2MB. 4MB wouldn't become standard until around 1994.
Meanwhile multitasking, by forbidding direct hardware access, would have required most applications to be deeply rewritten - and that was exactly what many software developers feared back then, when real-mode applications still sold like hotcakes, often at prices of several hundred dollars (people forget how expensive software was then). Many failed the transition to Windows (and OS/2) for exactly that reason.
There were attempts to write some "DOS better than DOS", but all of them failed for lack of interest, apart from some DOS extenders. GUIs offered advantages too big to be ignored, and still the obstacles to rewriting applications meant the failure of many DOS companies.
"For all their past mistakes, the 80386 did resolve most issues. Flat memory, 32 bit registers, more orthogonal instruction set"
In other words, the issues that Motorola got more-or-less right in the first place with the 68000. While I've spent almost all my professional life dealing with Intel based systems, I've always thought the way Motorola broke with the 8-bit architecture* when producing a more advanced processor was a better approach.
Certainly, some of your suggestions (e.g. 256-byte-aligned segments) would have given a 16MByte address range (comparable with the 68000), and 80186 instructions would have been nice to have from the start (if I remember correctly, there were some 80186 'almost PC compatibles' around - the 80186's built-in peripherals and associated interrupt map didn't quite match those used in a standard PC). But going from the 8080 to Pentium-class and beyond seemed more like a fade-in than a step change, with current generations carrying all the baggage of previous ones.
In a sense, keeping all the previous baggage makes people (including me) lazy/stingy (you decide!) - I was still using software originally written for an MSDOS 3.1 8086 machine when I had a 486 machine with Windows98SE.
*admittedly not entirely, as some instructions were designed to allow easy use of their 8-bit IO devices.
Flat memory will have to go away because it is insecure. It's simpler, but when every bit is readable/writable/executable you have a security issue. Intel segments had security access controls - i.e. a segment could be executable but not readable, and of course not writable, while data segments could be non-executable and read-only. Try to implement ROP in such an environment... many other code-injection techniques would fail.
AMD was very shortsighted when it threw them away in x64. Intel was ahead of its time in 1982.
If in the future we want OSes secure from the ground up, CPUs will need to re-introduce ways to protect memory beyond simple NX bits for pages, and give proper access rights to every piece of memory, depending on what it is used for.
Having multiple execution rings was also a good idea - privilege escalations would be much harder if not everything below user mode was running in the highest privilege mode.
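The access-control model being described can be sketched in a few lines. This is a simplified illustration, not the actual 286/386 descriptor format: each segment carries its own rights, so code can be execute-only (unreadable, so it can't even be scanned for ROP gadgets) and data can never be executed.

```python
# Hedged sketch of segment-style access control. The rights bits
# and names are illustrative, not the real x86 descriptor layout.

READ, WRITE, EXEC = 1, 2, 4

def check_access(segment_rights, requested):
    """Allow the access only if every requested right is granted
    by the segment; otherwise the hardware would raise a fault."""
    return (segment_rights & requested) == requested

code_seg = EXEC            # execute-only: can't be read back or written
data_seg = READ | WRITE    # plain data: readable, writable, never executable

print(check_access(code_seg, EXEC))   # fetching code: allowed
print(check_access(code_seg, READ))   # reading code bytes: denied
print(check_access(data_seg, EXEC))   # executing data: denied
```

Contrast this with a flat NX-bit model, where the only per-page distinction is executable vs not; the segment model above expresses "executable but not readable", which NX alone cannot.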
You have a lot of great points. I always considered the 64KB segment to be a smart decision when considering backwards compatibility with the 8085. It also worked really well for paging magic on EMS, which was not much more difficult to manage than normal x86 segment paging. XMS was tricky as heck, and DOS extenders were really only a problem because compiler tech seemed locked into Phar Lap and others' $500+ solutions at the time.
I don't know if you recall Intel's iRMX which was a pretty cool (though insanely expensive) 32-bit DOS for lack of a better term. It even provided real-time extensions which were really useful until we learned that real-time absolutely sucks for anything other than machine control.
Also, DOS was powerful because it was a standardized 16-bit API extension to the IBM BIOS. A 32-bit DOS would have been amazingly difficult, as it would have required all software to be rewritten, since nearly everything was already designed to use paged memory. In addition, since most DOS software avoided using Int21h for display functions (favoring Int10h or direct memory access), and many DOS programs used Int13h directly, it would have been very difficult to implement a full replacement for DOS in 32-bit.
Remember: on the 286, and sometimes on the 386, entering protected mode was easy, but switching back out was extremely difficult, as it generally required a simulated bootstrap. That meant accessing 16-bit APIs from 32-bit code might not have been possible; they would have had to be rewritten. For most basic I/O functions that wouldn't be problematic, but specifically in the case of non-ATA (or MFM/RLL) storage devices, the API was provided by vendor BIOSes that reimplemented Int13h. So, in order to make them work, device drivers would not have been optional.
In truth, the expensive 32-bit windowed operating systems with a clear differentiation between processes and system-call oriented cross process communication APIs based on C structures made more sense. In addition, RAM was still expensive with most systems still having 2MB of RAM or less, page exception handling and virtual memory made A LOT of sense as developers had access to as much memory as they needed (even if it was page swapped virtual memory).
I think, in truth, most problems we encountered were related to a >$100 price tag. Microsoft always pushed technology by making their tech more accessible to us. There were MANY superior technologies, but Microsoft always delivered at price points we could afford.
Oh... and don't forget to blame Borland. They were probably the biggest driving factor behind the success of DOS and Windows, by shipping full IDEs with project-file support, integrated debuggers (don't forget second-CRT support), integration with assembler (inline or TASM), and affordable profilers (I lived inside Turbo Profiler for ages). Operating system success has almost always been directly tied to the accessibility of cheap, good, easy-to-use development tools. OS/2 totally failed because, even though Warp 3.0 was cheap, no one could afford a compiler and SDK.
"Yeah, screw those guys for working to develop a file system and actually expecting other companies to pay to use it."
If it was for internal (e.g. Windows) use then sure. When it's forced into other standards (looking at you, SDXC) and makes an entire ecosystem outside of them dependent on it, then yes absolutely screw them.
A filesystem that is aimed primarily at exchanging data between devices shouldn't be held ransom to a single company and should be openly documented and standardised. Look at UDF for example - it's a shame that doesn't see wider adoption.
All this while they shout out about how they love Linux and are encouraging interoperability between devices. It's hard to take that message seriously while they still play games like this.
most/much of that portfolio is around SIMD
SIMD is by no means an Intel/x86-specific invention. There are similar instruction sets in other architectures - PPC, MIPS and, most importantly, Neon in ARM. I have some doubts that Intel will be successful trying to press anything against ARM in SIMD land.
The patents cover SSE and AVX specifically, and are the reason why AMD introduced their '3DNow!' instructions instead of SSE - Intel didn't grant them a patent license for SSE. When AMD introduced their 64 bit extension, obviously Intel needed access to that, so they signed a full cross license which is why AMD was able to support Intel's SIMD implementations of SSE and AVX and drop 3DNow!
Intel had started work on a 64-bit extension to x86 while AMD was talking with Microsoft about x86-64 support. Microsoft made it clear to Intel that it would only support one x86-64 version, and AMD was going to be first to market and win out. Intel was already in conflict with AMD over newer extensions (SSE etc.) and facing threats of anti-trust lawsuits in the EU. Intel chose to take the easy road and cut a cross-licensing deal with AMD. With a few minor changes to the core, the decoders and the microcode, Intel got all but a few instructions completely compatible (I remember there was an early bug where Intel CPUs didn't quite match the AMD behaviour). Intel copy-and-pasted AMD's ISA, did a find-and-replace of AMD64 with EM64T, and regained market dominance starting with the Core 2 series.
Actually Intel already had 64 bit support in shipping P4 CPUs when AMD announced Opteron/Athlon 64 and got Microsoft's buy-in.
Intel wanted to push everyone to Itanium to get 64 bits - first on servers, then workstations, eventually consumer PCs/laptops down the road - since it was fully patented and would be a legal monopoly with no pesky AMD nibbling at their heels. They had 64-bit support ready to go in the P4 in case they ran into issues, but the one thing they didn't foresee was Microsoft supporting an AMD-developed 64-bit implementation. Because Microsoft said they'd only support one, it was too late and Intel had to scramble to implement AMD's version of 64 bits. Because Itanium no longer had that push behind it, Intel's investment in it dried up and it is currently on its last version (a contractual requirement with HP, who co-developed it with them).
@Jack of Shadows - "unfortunately, most/much of that portfolio is around SIMD and right there is the basis of their threat."
This is software emulation, not a new chip. Actually it's probably not even real emulation, but rather binary translation. That is, it would cross-compile x86 binary instructions to corresponding ARM binary instructions.
ARM has its own SIMD. If the emulator can translate x86 SIMD binary instructions directly to corresponding ARM SIMD binary instructions, then there's no problem as the ARM chip is implementing it directly already.
The only way Intel's patents can mean anything is if their x86 chip is doing something related to SIMD that doesn't exist in ARM. And it if is doing that, then the ARM chip has to do it using normal non-SIMD instructions anyway. It's pretty easy to imagine the binary translator seeing an x86 SIMD instruction that doesn't exist in ARM, and just calling a library function that accomplishes the same thing using conventional instructions. I can't see Intel's patents coming into play in that case.
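The fallback path described above can be sketched as follows. This is a simplified illustration of the idea, not any real translator: when a packed instruction (here PADDD, the SSE2 packed 32-bit add, used purely as an example) has no direct host equivalent, the translator emits a call to a scalar helper that produces the same lane-wise result.

```python
# Sketch of a binary translator's fallback for a SIMD instruction
# with no one-to-one host equivalent: call a helper that does the
# same lane-wise work with ordinary scalar arithmetic. The helper
# name and dispatch table are hypothetical.

def scalar_padd32(a, b):
    """Lane-wise packed 32-bit add, emulated with plain arithmetic.
    Each lane wraps modulo 2**32, as the hardware instruction would."""
    return [(x + y) & 0xFFFFFFFF for x, y in zip(a, b)]

TRANSLATED_HELPERS = {"PADDD": scalar_padd32}

def translate_and_run(opcode, a, b):
    # A real translator would emit native SIMD where a direct
    # mapping exists and fall back to a helper call like this
    # only for the instructions that don't map.
    return TRANSLATED_HELPERS[opcode](a, b)

print(translate_and_run("PADDD", [1, 2, 3, 4], [10, 20, 30, 0xFFFFFFFF]))
# [11, 22, 33, 3] - note the last lane wrapping around
```

Since the helper uses only conventional arithmetic, nothing about the host's own SIMD hardware is being imitated, which is the scenario where Intel's SIMD patents would seem hardest to assert.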
I've been doing a bunch of work using SIMD recently, and what I can say about Intel's SIMD instruction set is that there may be a lot of instructions but that's mainly because there are multiple different overlapping sets of instructions that do very similar things. They just kept adding new instructions that were variations on the old ones while also retaining the older ones, resulting in a huge, tangled, horrifying mess of legacy instructions which they can't get rid of because some legacy software out there might use it.
Off the top of my head, the only SIMD feature that I have run across so far that Intel has a unique patent on is a load instruction which has the ability to automatically SIMD align arrays which were not aligned in RAM. It sounds great, but it's not really as big a deal as you might think, since good practice would have you simply align the arrays to begin with when you declare them. It's mostly of use to library writers who want to be able to handle non-aligned as well as aligned arrays for some reason. You take a performance hit for that flexibility however.
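For readers who haven't dealt with SIMD alignment: the usual pattern when data might be unaligned is to "peel" leading elements one at a time until the pointer hits a register-width boundary, process full aligned blocks, then handle the tail. The sketch below only does the bookkeeping arithmetic; the addresses and 16-byte block size are illustrative.

```python
# Sketch of SIMD alignment bookkeeping: split a byte buffer into a
# scalar "peel" prologue, full aligned blocks, and a scalar tail.
# BLOCK is illustrative (16 bytes, the width of an SSE register).

BLOCK = 16

def split_for_simd(addr, length):
    """Return (peel, aligned_blocks, tail) byte counts for a buffer
    of `length` bytes starting at byte address `addr`."""
    peel = (-addr) % BLOCK           # bytes until the next boundary
    peel = min(peel, length)         # a tiny buffer may never align
    aligned = (length - peel) // BLOCK
    tail = length - peel - aligned * BLOCK
    return peel, aligned, tail

# A buffer starting 3 bytes past a boundary: 13 peeled bytes,
# then 5 full 16-byte blocks, then 7 tail bytes.
print(split_for_simd(0x1003, 100))   # (13, 5, 7)
```

Declaring arrays aligned to begin with makes the peel and tail empty, which is why the special unaligned-load instruction mentioned above is mostly a convenience for library writers.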
I suspect that software publishers, including Microsoft, will offer native ARM ports for the most popular applications rendering this moot so far as they're concerned.
The spec AMD published for 64-bit is 17 years old (in August). No idea when AMD filed the patents (current US law is 20 years from filing), but they can't last much longer.
While "patent troll" might be specific to "non-practicing entities" "patent abuse" is certainly part and parcel of large tech firms. Other than the insanity of "shield patents" (good God, why should these be necessary), I can't see a company filing for a patent for a non-troll purpose.
As the battle between Samsung and Apple in California shows, while East Texas may be the most patent holder friendly it isn't like the other districts will immediately quash any such lawsuits. We might be reading about Intel vs Microsoft with Judge Koh presiding in a couple years...
It is true that legacy x86 software is one of the things that makes Windows so attractive, compared to Linux or the Macintosh.
But Microsoft also wants to encourage people to sell applications through the Windows Store, and to write them in managed code for the new post-Windows 8 interface formerly known as Metro.
So if Intel manages to hobble x86 emulation on Windows for ARM (cases concerning z/Architecture emulation on the Itanium come to mind as a precedent) this may not be a total disaster for Microsoft.
If MS thought they could get by without the emulation, they would not have put much time, effort, and cash into developing it. They're hoping eventually to have a lot of UWP apps that will allow them to deprecate Win32, but enough of those apps don't exist now, and a device that can only run UWP is presently dead in the water (like Windows phone). Unless and until that UWP library exists, that emulation is going to be the only thing that makes a Windows ARM device usable for the people who need more than what the few existing UWP apps can do (nearly everyone, in other words).
"It is true that legacy x86 software is one of the things that makes Windows so attractive, compared to Linux or the Macintosh."
What are you talking about - Linux can run legacy software including that written for Unix and it doesn't have compatibility problems between versions.
Windows ties you into versions very tightly - this is not attractive to anyone except Microsoft.
Please tell us more about "attractive" Windows server because the market has spoken and it is on its way out!
Intel is in a dangerous position right now. They have the capital to recover, but they must be prudent. Many people don't realize how good a design Ryzen is for the upper-upper end. When Intel needs to make a 16-core chip, they have to make a large 16-core die. When AMD releases Threadripper and Epyc, to get to 16 cores they can just connect two 8-core chips together; AMD calls the interconnect Infinity Fabric. The cost to make a 16-core die is significantly higher than the cost to make two 8-core dies, and that is even considering Intel has some of the best fabs in the world. The Core i7/Xeon may be faster for games, but it won't be able to compete with AMD on price/performance. A 16-core Xeon might be better than a 16-core Epyc, but not $1000 better. And with the Ryzen design, AMD can make a 32-core CPU, price it the same as a 16-core Xeon, and still make a huge profit.
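A back-of-envelope yield model shows where the small-die cost advantage comes from. Assuming a simple Poisson defect model (yield = exp(-defect_density x area)) with made-up numbers for defect density and die areas:

```python
import math

# Back-of-envelope defect-yield model for monolithic vs multi-die
# designs. The Poisson form and all numbers here are illustrative
# assumptions, not real foundry data.

def die_yield(defects_per_mm2, area_mm2):
    """Probability a die of the given area has zero defects."""
    return math.exp(-defects_per_mm2 * area_mm2)

D = 0.002                       # assumed defects per mm^2
big = die_yield(D, 400)         # one monolithic 16-core die
small = die_yield(D, 200)       # one 8-core die

# In this naive model, P(one big die good) equals P(two small dies
# both good). The small-die win is elsewhere: a failed small die
# wastes half the wafer area of a failed big die, and a small die
# with one dead core can still be binned and sold.
print(round(big, 4), round(small ** 2, 4))
```

So the headline "two small dies are cheaper than one big die" is really about wasted wafer area and salvage binning, not the probability of a fully-working part.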
Add to all that the pressure of ARM CPU's. I don't know how the future plays out, but if people ever decide they don't need legacy support, ARM has an easy path to desktops/laptops.
And why did Intel make Thunderbolt an open standard? What did they gain? This was a prized plum for them. It kept Apple locked in to Intel. But Apple has been getting closer to AMD lately. Look at the new iMac pro with a Vega GPU in it. Some people feel that Apple put pressure on Intel to open Thunderbolt. If so, Apple could be using a future Ryzen APU as leverage for better prices.
Intel has the money and engineers to copy AMD's design, but even if they started today, it would still be years before such a design came to market. Intel won't be able to put illegal pressure on OEMs anymore, so they won't be able to hold AMD back like in the Athlon 64 days. If they do, the fines probably wouldn't be worth it this time around. The best they can do is continue to pay for OEM advertising like they do now. (That is standard in business. My friend has an HVAC business; when he sells a lot of A/Cs of a particular brand, that company buys an ad for his business proportional to the amount he sells.) At least Intel understands marketing, unlike AMD.
Intel better plan for the future well. We need Intel. And AMD. And Qualcomm. And NVidia. And other CPU/GPU companies. When there is competition, we all win.
In my microelectronics experience, a custom/task-specific micro is 10-30x as efficient as a non-specific part.
Of course this is only for the processor... then you have memory, fabric connectors, etc. I still believe you could pull a 5x efficiency gain on HTTP or MariaDB or whatever specific task you give the micro.
It is a question of determining the instructions being executed by the micros, and optimizing as much as possible the platform for those.
It does not make sense for the kind of operations I run, but it certainly would for google, amazon, etc.
Of course, this creates additional problems.. as then changing your stack becomes slow and expensive... your processors are custom made for your stack!!
I would prefer we don't go that route, as those are benefits that we, the non-titanic-size operations, would not see.
We currently benefit from the investments these companies do: we can also buy these processors and put them in our servers, or rent them.. but if they all go custom, we would be left to run "legacy" platforms, and unable to compete.
And if the cost of entering and exiting the market is huge, there would be no free market, but an oligopoly of internet companies.
Your linked article is not evidence. It's hypothesis.
Specifically, it's a calculation based on published ratings and some tests (not linked or adequately documented) which is assumed to give a good estimate of performance per watt: "We are not pretending that our calculations are 100% accurate, but they should be close enough."
There are also a lot of "probably"s and "assume"s elsewhere in that article.
Evidence would be actually running specific loads on specific systems and comparing those results.
The whole idea of "x is more efficient then y" is incomplete, lacking the vital qualifier "at z."
"We are not pretending that our calculations are 100% accurate, but they should be close enough."
That comment was made re a decision whether the X86 Avoton or the X86 Xeon was more efficient. The ARM part was one third as efficient.
Here's another hopeless but much hyped ARM server product:
" The expected performance and power consumption are most likely not competitive with what Intel has available".
And that's from Johan De Gelas - a long-time ARM server cheerleader.
Why do you keep using the single source AnandTech as evidence for your claims?
If you can provide multiple different sources then you rule out the possibility of bias.
You are aware of bias I hope?
Doesn't matter who the author is, it's the publisher that decides what goes in there, they have the final say.
A few rounds of golf, and the advertising budget is brought up, Intel has a lot of money for advertising.
I speculate, of course, but it's a possibility, one that you need to rule out by ceasing to rely on a single source for your claims.
Re: 4 small chips = 1 big chip... If this were true then the world would have been full of Multi-Chip Modules many decades ago. And it's not :-)
And who says that 8 cores in 200mm² marks the best price-performance compromise??
Makes you wonder why Intel invested all those billions in defect reduction, doesn't it ??
"And who says that 8 Cores in 200sqm marks the best price-performance compromise ??"
is it even POSSIBLE for Win-10-nic to run 8 cores like that? What, with Micro-shaft's ridiculous licensing policies, etc..
Intel should market desktop Linux and multi-core-ready applications to get CPU sales up. Just sayin'.
As for competition, let Micro-shaft emulate all they want. A good native Intel architecture will outperform emulation any day. you can also think of it as "validating the standard".
On Ryzen: I am one satisfied customer!
AdoredTV on YouTube has a good summary of events: the design of the Infinity Fabric makes use of the fact that smaller dies give higher yields (so you get more chips per wafer, which means lower costs per unit, and more working cores per wafer). For example, an 8-core Ryzen is 4 units tied together, ThreadRipper will be 8 of these tied together, and Epyc 16 tied together (one rather large chip area) - but the main point is that the Zen architecture is shared up and down the line for everything, making development a lot cheaper for AMD. If this design holds up to what is asked of it, AMD can wheel out multi-core chips at a faster pace, throwing more cores in as needed.
So yes: AMD this time around appear to have a good formula, and a solid plan for the CPU division, backed by a whole lot of engineering into a fresh architecture. Hopefully they get better on the marketing.
This is not a problem for industry.
Who cares about a dying crap operating system (Windows) that is tied to a particular CPU architecture?
Linux and all its code including that written for other *Nix's is portable.
You can talk about legacy software that only runs on Windows if you want to but if you have that in your enterprise then just continue to run it on your current hardware and replace it with something Linux based moving forwards.
I believe that the reason for this is that x64 is an extension of x86, not a drop-in replacement.
As a result, x64 processors based on the AMD64 architecture can still run x86 software, in contrast to Intel's IA64 architecture (the infamous Itanic, er, Itanium), which was 64-bit native and incapable of running 32-bit software.
This does mean, however, that there is an unremovable dependency on the x86 bits of the processor.
My DEC Alpha workstations running NT4 all included a bit of kit called FX!32 that translated x86 binaries through a JITC translator into native Alpha code. It stored the results in a cache file so that subsequent executions didn't have to retranslate the same code. Translated programs ran at about 80% of the speed of native apps. It was such an important service that Microsoft included it in NT5/W2K. That is, until the Alpha was killed right around RC1.
This was back in '99, two years after the release of MMX. I don't recall if it converted MMX instructions. And it appears as if patents on MMX and SSE might be the sticking point.
Still, Qualcomm might be able to force Intel to license them based off of F/RAND rules if they can convince a judge that Intel's ISA meets the criteria of being an industry standard that requires licensing. Or they might withhold licensing future patents from Intel until they get a cross-license deal in return. I guess that's all up to the IP lawyers now.
The FX!32 process is exactly how CEMU emulates the Wii U (it caches translated GPU shader code rather than CPU instructions; the PPC-to-Intel part is 'pure' emulation).
Nintendo are very, very into litigation, and CEMU is big enough for the Big N to go after should it decide to. It hasn't, because it knows it would lose (not to the CEMU devs but to the EFF, which would back them to the hilt).
Intel is just huffing and puffing; I don't think the PC manufacturers will listen this time, as Intel's pockets aren't THAT deep. "Intel Inside" worked by throwing money at the problem (illegally), not by threat of litigation.
I seem to recall, but I could easily be wrong, that Transmeta's approach was vindicated by the courts so presumably anyone who licensed that would be reasonably safe.
But I think there are other established methods of code interception and emulation such as that used by Rosetta in MacOS, that could be employed in software with just some kind of hardware accelerator. For Microsoft the biggest hurdle will be the shitty x86 instruction set, which is out of patent and for which they probably already have a licence. For anything that really requires SSE and similar optimisations where software emulation isn't fast enough recompiling might solve the problem, especially if .NET is being used correctly. But somehow I don't think that video encoders from 2002 are going to be high on the list of must run software.
Intel's defence would be to get an injunction on the sale of devices, but there are risks that it would be slapped down or that it would be limited to devices sold in the US. That Qualcomm is a major supplier to the US Department of Defense could never influence any court, could it?
Microsoft's risk is being shut out of the fast growing mobile market altogether.
It's a long time ago, but AFAICR it wasn't anything to do with the instruction set but with presenting the OS with a consistent API to the hardware. The kernel is still compiled to the native code of the processor, but the developer doesn't have to worry about what sort of bus etc. the CPU can see. I'm not sure how this relates to the drivers; maybe their job is to present the HAL API.
Who downvoted? I did; the following makes absolutely no ff'ing sense, so much so that I stopped reading and downvoted:
in that a program compiled for one CPU could run on a version of Windows based upon a different CPU
1. HAL was there to present a single interface to drivers, on multiple platforms.
2. You still had to compile the software/drivers for the target Windows/platform combination
The question at the end is not really relevant since you got the rest all wrong. Still, a simple search yields quite a lot of interesting stuff:
Obligatory Space Odyssey reference: "I'm sorry Dave, I'm afraid I can't do that"
The HAL was designed to implement all the low-level OS functions so the rest of the OS would not need to change if the underlying hardware architecture changed.
The HAL implements platform-specific details like I/O interfaces, interrupts, etc. There may even be different HALs for the same platform (e.g. x86 with one processor versus multiple processors, or different interrupt controllers).
Even low-level drivers are usually built on top of the HAL too, i.e. they will use HAL calls to perform I/O and manage interrupts.
One notable exception is the memory manager: it's not implemented inside the HAL, although it is of course an architecture-specific module.
But everything (HAL, memory manager, kernel, drivers, applications) needs to be compiled for the actual CPU. The abstractions just simplify the code changes needed to support different platforms.
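The HAL idea above can be sketched as two platform-specific modules exposing one interface. This is an invented illustration, not the real NT HAL API: the class names, methods, and the instruction mnemonics returned are all made up to show the shape of the abstraction, namely that a "driver" written only against the HAL interface is unchanged across platforms even though it must still be built and run against one specific HAL.

```python
# Toy sketch of a HAL: each platform implements the same interface,
# and code above the HAL never changes between platforms.
class X86Hal:
    def mask_interrupts(self):
        return "cli"                  # x86: clear interrupt flag
    def port_read(self, port):
        return f"inb {port:#x}"       # x86 has port-mapped I/O

class AlphaHal:
    def mask_interrupts(self):
        return "swpipl"               # Alpha: swap interrupt priority level
    def port_read(self, port):
        return f"ldl [io+{port:#x}]"  # Alpha uses memory-mapped I/O

def driver_probe(hal, port):
    # A "driver" written only against the HAL interface; identical
    # source works on both platforms.
    hal.mask_interrupts()
    return hal.port_read(port)

assert driver_probe(X86Hal(), 0x60) == "inb 0x60"
assert driver_probe(AlphaHal(), 0x60) == "ldl [io+0x60]"
```

This mirrors the point made above: the abstraction moves the platform differences into one module, but every layer still has to be compiled for the actual CPU it runs on.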
This technology is not exactly unknown.
The AS/400 had a "Machine Interface" layer that implemented, in effect, an object-oriented instruction set. Compilers generated code for this virtual machine, which was publicly documented.
During its life the underlying processor went from a proprietary, completely undocumented hardware system to a PowerPC processor (i.e. the same as the AIX *nix boxes). Provided any custom software was compiled with the necessary options (basically a readable symbol table), you could copy the object code to the new processor and on first run the machine would do the conversion the one and only time it needed to be done. That was in the mid '90s, when clock frequencies were an order of magnitude or two slower than today.
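The AS/400 scheme described above (ship portable Machine Interface code, translate to native form once on first run on new hardware) can be modelled in miniature. Everything here is invented for illustration: the tiny stack IR, the class, and the fake architecture names are not the real MI, which was a far richer object-based instruction set.

```python
class MiProgram:
    """Toy model of the AS/400 approach: a program is stored as
    portable 'Machine Interface' code; the first run on a new CPU
    translates it once and keeps the result with the object."""

    def __init__(self, mi_code):
        self.mi_code = mi_code        # portable representation, kept forever
        self.native = {}              # per-architecture translations

    def run(self, arch):
        if arch not in self.native:   # first run on this hardware only
            self.native[arch] = [f"{arch}:{op}" for op in self.mi_code]
        # For the demo, execute the tiny stack IR directly.
        stack = []
        for op in self.mi_code:
            if op.startswith("push "):
                stack.append(int(op.split()[1]))
            elif op == "add":
                stack.append(stack.pop() + stack.pop())
        return stack.pop()

prog = MiProgram(["push 2", "push 3", "add"])
assert prog.run("proprietary-cisc") == 5
assert prog.run("powerpc") == 5       # same object, new CPU
assert len(prog.native) == 2          # translated once per architecture
```

Keeping the portable form alongside the object is what let IBM swap the CPU out from under customers: the translation is per-machine housekeeping, not a porting project.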
It's not that you can't make efficient cross platform OS's.
It's that MS does not really want to. BTW, IIRC Windows 95 had 21 layers of abstraction between a disk read and getting the data back to your application.
And while it continues to maintain the psychotic Bromance relationship with Intel it never will.
There's liars, damned liars, and general counsel.
"... and we are confident that Intel's microprocessors, which have been specifically optimized to implement Intel's x86 ISA for almost four decades, will deliver amazing experiences, consistency across applications, and a full breadth of consumer offerings, full manageability and IT integration for the enterprise," general counsel Steven Rodgers wrote on Thursday.
Has anyone noticed that Intel CPUs seem to get very hot when they're invited to, er, do anything?
No need to do that. AMD could simply manufacture a double quad-core: four x86-64 cores and four ARM cores, running on ARM exclusively most of the time, running x86 code emulated if needed, and switching on the x86-64 cores only when more performance is needed. The only problem is cost; I don't think such a CPU would cost less than $100, and that really precludes all low-end segments and much of the midrange, too.
The earliest Intel competitor I remember is Zilog. Their Z80 (which powered my first computer, a QDP-100) was a better 8080. But I looked it up, and Zilog's greatest processor, the Z80000 (count the zeros: four), was not compatible with Intel's 32-bit instruction set. Just in case we're in why-spend-kazillions-on-litigation-when-we-can-buy-a-defunct-product-for-a-song mode.
I've preferred Intel to AMD because of the perception of better energy efficiency. But reading this thread, I'm getting the impression that Intel is only more efficient when the processors are idle? My personal embarrassment: +1.
Patents are only good for 20 years, so it's perfectly legal for Qualcomm and Microsoft to emulate Intel's 32-bit architecture. I bet that's why they're only supporting 32-bit processes in the emulator. The 64-bit instructions are still patented, as are the SSE extensions, so they can't add support for those yet.
Biting the hand that feeds IT © 1998–2022