> Apple would then have to argue in court that their logo looks like a dildo.
Or, Apple could just admit in court that they are, in fact, a dildo, and move on.
> I could be Jesus if someone would just make me water walking shoes.
I don't know how this relates in any way to VLIW and Itanium.
I explained why the performance of software compiled for Itanium was poor. If you don't like the explanation, or don't understand it, invent another one for your own use, and move on.
> For x86 being a CISC frontend that is translated to RISC micro ops since the P6, I'm basing it on Intel technical documents.
No, you're not. This legend about Intel being a RISC machine hidden inside a CISC machine is a piece of poorly manufactured fiction being peddled on web sites such as Ars Technica or AnandTech. These are not reliable sources of information. I've been reading this online story for at least 10 years. It is just as incorrect today as it was five or ten years ago.
Same goes for your P6 theory.
If you have links to Intel documents stating anything remotely close to your RISC inside CISC architecture theory, or your P6 theory, please provide the URL(s) here.
> [ ... ] VLIW relied on memory/cache throughput to deliver performance and it lacked this [ ... ]
That's not true at all. Not for Itanium, and not for VLIW CPUs in general, either.
Itanium's particular flavor of VLIW delegated the responsibility for static parallel instruction scheduling to the compiler, unlike super-scalar CPUs, which schedule their instruction pipeline dynamically (read: at run-time), in silicon.
Both approaches (VLIW vs. super-scalar) attempt to deal with ILP - instruction-level parallelism.
It is possible to design a hybrid super-scalar/VLIW CPU. Not sure what the advantages would be. You end up with a super-scalar CPU anyway.
Static parallel instruction scheduling was Itanium's single point of failure. Itanium compilers were really bad at static parallel instruction scheduling, mainly because there was little to no experience in writing that type of scheduling backend. That, in turn, killed any performance benefit that could be derived from Itanium's VLIW architecture in the first place. That's the real reason software compiled for Itanium performed so poorly. And that's the real reason Itanium software had to be re-compiled every single time there was an Itanium iteration update, or a compiler upgrade.
Yes, in hindsight, Itanium's designers shouldn't have delegated parallel instruction scheduling to the compiler, simply because it turned out to be almost impossible to do correctly. In the end, Itanium instructions ended up being scheduled just like on an in-order CPU. Goodbye parallelism, which was the point of having a VLIW CPU in the first place. But, the thinking at the time was that it would be possible. Maybe with another 5 years of sustained effort it would have been.
You're wrong on the P6 assertion. Current x86_64 core microarchitecture (Core2) has nothing to do with P6 (i686). The list of differences is simply too long to type here. I'll just mention one major difference: the length of the instruction decoding pipeline.
Please stop repeating the incorrect drivel about x86 being a CISC front-end on a RISC backend. It's not, and not by a long shot. Who comes up with this stuff?
RISC micro-architectures are load/store architectures. x86/x86_64 is definitely not a load-store architecture. It never was, and probably never will be.
> [ ... ] CPU not able to reach announced performances [ ... ]
Yes, the performance of the first-iteration Itaniums was quite terrible. OK, let's be more precise: the performance of software running on the first-iteration Itaniums was terrible. But that's to be expected: Itanium was a completely new architecture.
Writing a compiler backend with an efficient codegen for a VLIW CPU was, at the time, (a) super-bleeding-edge new, with not a lot of prior experience to draw from, and (b) very difficult. It still is.
Itanium wasn't designed to be an instant replacement for RISC CPUs. It was more of a forward-looking future development direction. In the end, Itanium's underlying motivation proved correct: RISC CPUs - and SPARC is no exception - did hit a performance wall. Most of those RISC architectures died along the way.
A lot of concepts from Itanium's instruction scheduling were re-targeted by Intel in their Core/Xeon x86_64 architectures. So, in the final analysis, Itanium wasn't a complete waste of time. There's quite a bit of Itanium under the hood in today's Intel x86_64 cores.
> Sun, as a competitor was not involved in Itanium.
Yes, Sun was involved in Itanium. Sun had a port of Solaris to Itanium, back then.
Several groups ported operating systems for the architecture, including Microsoft Windows, OpenVMS, Linux, HP-UX, Solaris, Tru64 UNIX, and Monterey/64. The latter three were canceled before reaching the market. By 1997, it was apparent that the IA-64 architecture and the compiler were much more difficult to implement than originally thought, and the delivery timeframe of Merced began slipping.
Which means Sun had an Itanium compiler. A.k.a. the Sun compiler. You can read the long list of contemporary references to Solaris Itanium on the Wikipedia page.
As usual, after promising everything under the Sun (pun intended), Sun backed out of their commitments to deliver Solaris Itanium, because of SPARC.
Solaris 2.5.1 included support for the PowerPC platform (PowerPC Reference Platform), but the port was canceled before the Solaris 2.6 release. [ ... ] A port of Solaris to the Intel Itanium architecture was announced in 1997 but never brought to market.
I still don't see how Itanium is of any relevance to AMD, or vice-versa. AMD was never part of the consortium that worked on Itanium.
Sadly, ElReg seems to have been overrun by clueless bullshit artists who will write just about anything to promote falsehoods.
> Intel's Itanium play failed spectacularly [ ... ]
Yes, in the end it did fail miserably. A few good things came out of it, though.
But, Itanium's failure didn't make a dent in Intel's profits.
And, in the interest of full disclosure: Intel wasn't solely responsible for Itanium's failure by any stretch. Itanium was supposed to be a joint project.
A lot of well-deserved blame for Itanium goes to HP, and to Sun Microsystems. Instead of producing something constructive, Sun decided to play the obstruction game. Because they believed it was in their best interest (SPARC64) at the time. Sun played that game a lot, back then, and it always ended in failure. Undermining Itanium didn't save SPARC.
> Why should that be given airtime?
Rule #1 for CEOs testifying live before Congress: Never answer the question.
It's given airtime because CEOs should be afforded the opportunity to answer questions. The fact that they don't is their choice.
Absent relevant input, Congress is now free to legislate any which way it wants. And that's the part that we never get to see. It involves behind-closed-doors lobbying, influence peddling and campaign financing.
The results are what we all know them to be.
> So no, I don't think Microsoft is doing a good job. They are serving themselves as usual.
They're a for-profit corporation that has decided there's value - for them, and only for them, at this point in time - in playing the Open Sauce PR/Marketing game.
I for one didn't, and don't, expect anything else from them. Advantage: me. Microsoft can't possibly disappoint me.
FWIW, they cut the price of GitHub personal subscription to $4 from $7. Boo-Ya!
> Cost, efficiency, size, features, and no doubt plenty of other things depending on the use case.
Cost is not a SPEC parameter. And it's not quantifiable anyway. Both Intel and AMD charge what the market will bear, and the price actually paid has nothing to do with the price advertised. So we don't know what cost even means here.
Size is not a SPEC parameter. I don't even understand what size means in this context. Size of the die? Size of the chip itself? Size of the chip socket?
Define efficiency. Did you mean power consumption? It's not a SPEC parameter. SPEC has actually defined an output for power consumption, but no-one ever reports it.
Features? What features? What does features mean here? Is there a list of clearly defined features?
no doubt plenty of other things depending on the use case: What other things? Can you list them? I use AMD chips as coasters. Is this a valid use case?
You do not appear to have even a minimal theoretical grasp of what a benchmark is. You keep mixing in nebulous, undefined terms -- features -- that appear to suit your confirmation biases of the moment. And when faced with the actual results of the benchmark -- namely, numbers produced in a controlled environment that followed the evaluation specs -- you conveniently ignore them if they happen to contradict your expectations bias. Case in point: So what if Xeon objectively produces better benchmark results?
Or you counter them with undefined terms for which no information is available. Case in point: features.
All of this tells me three things:
- you've never run a benchmark of any kind.
- you've never been tasked to run a benchmark of any kind.
- you can't be trusted to run a benchmark of any kind because you are incapable of isolating your expectations bias / confirmation feedback loop from the benchmark results.
Meaning: if you are faced with an outcome that does not match your expectations bias, you will intentionally skew the benchmark results just to confirm your bias.
> The server targeted parts indeed lack video ports [ ... ]
Nope. They don't lack video ports at all. They have at least two video ports.
And thusly, you've just announced to the world that you have no clue what you're talking about. Evidently you've never seen one of those NVIDIA boards that's targeted for the purposes you describe. But you're an expert.
Too late now, but why don't you go take a peek at NVIDIA's site - or Amazon. They have pictures of those video boards that are used as GPU co-processors. Yes, all the models have HDMI out ports. You can attach a monitor.
As for myself, I installed one of those super-expensive NVIDIA boards just last week in one of our boxes. Because I'm playing with CUDA at work.
> It's not like you can plug a monitor into these things.
Really? You can't plug a monitor into an NVIDIA card? Or a mobo with an on-board NVIDIA GPU? Not even a little bit?
> If a chip is designed and built to go into a server, then it is by definition, a server chip.
Awesome! That clears it all up. Keep 'em coming, mate.
> [ ... ] the latest Epyc are regularly scoring in the 500 for FP Rate and near 200 on FP Speed.
The AMD benchmark was run with 128 threads.
The Xeon benchmark was run with 96 threads.
Both benchmarks scored a peak ratio of 212.
What follows from these results is that Xeon vastly out-performs Epyc on SPECspeed 2017 FP.
The higher the number of threads, the higher the score in the SPEC ratio computation. So: if Xeon manages to score the same ratio with a lower number of threads compared to Epyc, it follows that running the Xeon benchmark with the same number of threads as Epyc would necessarily score higher.
I didn't have the time to search through the hundreds of submitted SPEC results and find the absolute perfect optimal submission for either manufacturer.
> Or was that the only metric that supports your view?
No, that's not why I only mentioned SPECint speed. I only mentioned SPECint speed because mentioning all four benchmark sets would have taken waay too much space with links and all. And because SPECrate performance is much more dependent on the performance characteristics of the system as a whole, as opposed to SPECspeed.
> Epyc seems to do pretty well in FP rates [ ... ]
pretty well is beside the point. Does it do better, the same, or worse than Xeon? From what I can see, it still can't beat Xeon.
> Erm, yes they do.
[ In response to NVIDIA doesn't make server chips ].
Granted they don't make CPUs (yet), but these are still 'server chips' [ ... ]
First, you're contradicting yourself.
Secondly, I said nothing about revenue. I don't care about revenue. The article is about Intel's 7nm fab process, not about revenue.
Thirdly, these aren't 'server chips' any more than they are 'laptop chips' or 'desktop chips'. These are GPUs. Do you know the difference?
Lastly: do you work in marketing somewhere by any chance?
> Which is why you're getting all the downvotes for pushing SPEC as the only benchmark that matters.
Or maybe it's because the vast majority of commentards here don't understand the difference between an industry benchmark based on some objective parameters, and a personal opinion based on hormones.
That's like downvoting a blood test because you don't like the results.
If SPEC was as irrelevant as you claim it is, why are there so many official SPEC submissions from the industry?
> [ ... ] no-one actually cares because a performance benchmark without taking other factors into account is completely meaningless.
What other factors? Care to enumerate them?
If anyone has a better performance benchmark in mind, propose it, and have it accepted by the industry. Until that happens, SPEC is the only one we've got.
> Erm isn't that a per-thread figure?
The SPEC ratio is a number that is generated by SPEC software. It represents the relative performance index of the submitted result. For SPEC speed, the ratio takes into account the number of threads, if the benchmark contains OpenMP parallel blocks of code (i.e. threads). The higher the ratio, the better the relative performance. Not all SPEC speed benchmarks use OpenMP. Many do, but some don't.
Security concerns, or cost of hardware or software aren't part of the SPEC benchmark parameters.
> From this article in March, looks like the AWS Graviton 2 [ ... ]
Only results submitted to, and published/verified by, SPEC are valid.
Claims made anywhere else about SPEC performance results that aren't submitted to SPEC are just marketing bullshit. Certain criteria must be met in order for a SPEC benchmark result to be considered valid. One of the criteria is repeatability. There are several other criteria.
If the claimed results weren't submitted to SPEC - and they weren't, because I searched for submissions on ARM64/AArch64, and there aren't any - that tells me everything I need to know about their validity.
Yes, it's terrible optics that Intel's 7nm fab process is so b0rked, but:
- Apple doesn't make server chips.
- Core i9 is still a good proposition for desktops/laptops.
- NVIDIA doesn't make server chips.
- Marvell - good luck to them with ARM64. So far, no-one's ARM64 is beating Xeon on SPEC performance.
- AMD - doesn't beat Xeon on SPEC performance either.
The relevant benchmarketing standard here being SPEC.
Intel routinely exceeds a peak ratio of 12 on SPECint 2017 speed, while AMD maxes out at 10.5. Link.
Some people dislike SPEC because they say it's irrelevant. However: SPEC is not a collection of purely artificial programs written specifically for benchmarketing purposes, and with no connection to real-life software. All the benchmarks in SPEC are real, mostly open source, programs, that were originally written for clearly defined practical purposes. When in Rome ...
Maybe nm bragging rights don't always translate into performance.
> Yet we are supposed to replace them both with a single word [ ... ]
Calm down. No-one is forcing you to replace anything. VMWare is throwing a tantrum over some certain specific words. You're throwing a counter-tantrum over VMWare's original tantrum. I'm not sure that an impartial observer can tell the difference between the two.
- abort(3C) is in the Standard C Library.
- kill(2) is in POSIX.1-1988.
- Most of the common commands in /usr/bin are either in the Single Unix Specification, or in POSIX.
So, none of these can be renamed. Commercial UNIX can't rename them because they'd no longer be SUS Compliant. Linux won't rename them because Linux is mostly SUS/POSIX compliant - they just don't care about getting an official certification.
There's also the practical aspect of breaking every single program or shell script ever written.
Just because someone at VMWare lives in a fantasy-land it doesn't mean that whatever newspeak directive they wrote is going to become reality anytime soon. Or ever. VMWare doesn't even write or publish an operating system to begin with.
> Does my background matter in this?
Yes, it does. Are you speaking as a subject matter expert, or are you just opining?
> I've provided links to support most of my claims.
You provided a link to a political analysis document written by The Heritage Foundation claiming 1071 - one thousand and seventy-one - fraudulent votes out of slightly more than 130 million votes cast in 2016. Your point is?
The other links you provided aren't political analysis documents. They are opinion articles.
Being from the US certainly gives me significantly more insight into US politics than you would ever get as a pure spectator from far away.
I don't opine on UK voter fraud. I don't live there, I don't vote there. And I'm not a subject matter expert on UK politics or the UK voting system. I don't really know how the UK political system works in real life, save for some generalities I learned in school and at uni. And some personal observations I made - purely as a spectator - while I lived there. And what I read in the media. None of these make me an expert. So, as a distant observer of UK politics, I refrain from pretending that I am an Internet UK politics expert.
> I'm not from the US.
Are you an expert on US politics?
If Yes, I'm genuinely interested in learning how you reached your conclusion regarding voter fraud in the US. There's gotta be some URLs somewhere that you can share. US voter fraud sounds like a pretty interesting topic of research for a political scientist.
If No, WTF are you talking about?
The only person pushing the narrative of mail vote fraud in the US is Donald J. Trump. Without an iota of evidence.
He even established a Commission in 2017 to study his pervasive voter fraud in the US claim, and come up with a report that would please him. He packed it with supporters. They disbanded after about a year and a half without ever producing a report of any kind. Kris Kobach was the vice-chairman of that commission.
Why are you pushing the same narrative? If you have proof of mail voter fraud in the US, please provide it here. Once you provide that proof it won't be a narrative any longer, it will be fact.
If you don't have proof - which is what I suspect is the case - you're just pushing an extreme-right narrative with no basis in fact. Why are you pushing it?
Law and order, pervasive voter fraud, national security, xenophobia, nationalism, exaggerated patriotism, these are all defining narratives pushed by demagogues aspiring to establish a right-wing authoritarian regime. Some of them succeeded. Why are you falling for it?
Aside from the number of infections - which is rising in most states - his handling of Covid has been terrible.
Teaching: schools and unis have been closed since mid-March. Of course he's now pushing the re-opening of schools and in-classroom teaching. His official line is that Covid is just like the flu.
Going into September - October with schools still closed and the number of infections increasing monotonically, how does that look at the ballot box?
I feel very little (read: zero) sympathy for his supporters. He was a known pathological liar in 2016, and they still voted for him. They will still vote for him in November. A bit late to have an epiphany now and start complaining about being misled.
I don't care if his supporters end up infecting each other with Covid because they refuse to wear a mask and don't observe social distancing rules. Follow his model. He is your Leader. Go visit Texas, or Florida. Two of the first states to declare "Covid is over!" back in May. Now they're running out of ICU beds in hospitals. Don't wear a mask, and pretend it's just the flu. Go to Covid parties to get infected on purpose, wearing a MAGA hat. In their world, it's patriotic and a celebration of freedom.
> It therefore has both informative and educational value.
I'm happy it works for you. It doesn't work for me.
Lose the two-cent Internet psycho-analysis. No-one appointed you - or any other of your fellow Nixon fanbois - arbiters of truth and taste here.
If you find fake videos featuring Richard Nixon informative and educational, perhaps you should seek professional help.
> The fact you're the only seeing it as humorous suggests maybe any bad taste is all on you.
I don't see it as humorous, or funny. At all. I have already stated that several times. Which is why I gave the Goebbels example.
However, its authors clearly did. As proven by the fact that the video has no other possible purpose. The video has no educational or informative value. It's either (a) a failed attempt at humor or (b) a tasteless ego trip along the lines of Look At What We Did! We Smart!
In either case, it's inappropriate.
Are you being intentionally dense, or is it just your natural state?
> [ ... ] the article never said it was a joke or implied that there was any humour in it. [ ... ]
The humor is implied in the fake video. I never said anything about the article. Are we having reading difficulties here?
You'd have to be willfully blind not to understand the failed attempt at humor by absurd that's both implied, and explicit, in the video.
Let me give you - and those others who don't understand - a clue about why this video is inappropriate and in bad taste: for the same reasons a deepfake video of Joseph Goebbels reading the Universal Declaration of Human Rights would be (a) not funny at all and (b) in bad taste. There would be exactly zero educational point to it. Just like there's exactly zero educational purpose to this MIT pile of garbage.
> The fake video is a learning tool with no intention of having any bearing on reality
That's a lame excuse.
Richard Nixon does not belong in any kind of humor-based learning tool or educational material. The implied humor - of which I see none here - is in piss-poor taste. Whether it's based on reality or not is completely orthogonal.
One can educate and/or create humor without the need to manufacture fake videos of war criminals.
I will raise the issue of your own issues. You seem incapable of understanding just how inappropriate this so-called learning material - which is no such thing - really is.
You also appear to be very quick at assigning an Internet Psychologist diagnosis to anyone who happens to hold a point of view different from yours. That must be based on the implicit assumption that you are always right, and that anyone who disagrees with you is wrong.
Would you like to learn what I think about Internet Psychologists?
> Might want back and re-read the article [ ... ]
I read the article. This has nothing to do with the article anyway. It has everything to do with the video, and its authors.
There is nothing remotely funny or entertaining about Richard Nixon, in any materialization: real, fake or deepfake.
Might want to go read some history about Richard Nixon and what he was directly responsible for.
I'll give you one hint: more than 1.2 million Vietnamese, Cambodian and Laotian civilians killed by mines, carpet bombing and chemical warfare (Agent Orange/Napalm) during the Vietnam War, on Nixon's orders. I'm not counting the US servicemen who died, were left behind, or came back damaged beyond repair. All the while, he was proclaiming at home that everything was going just great, that the number of casualties was going down sharply, and that victory was just around the corner.
That's just one of his accomplishments. He has many others. Watergate, Southern Strategy, War on Drugs, Law and Order, to name a few.
Still find him entertaining?
I can only guess that the brilliant minds from MIT who came up with this doozy do not remember, or even know, who Richard M. Nixon really was.
What would have been the reaction had they made a deepfake video about Trump? Still funny?
The main difference between Nixon and Trump being that Nixon wasn't a moron. The rest was more or less identical.
> Tensorflow [ ... ] holy good grief this is utter pile of cobbled together shit
Not just Tensorflow. I could name two other well-known ML frameworks that are just as bad, if not worse. One of these other two doesn't believe in documentation of any kind.
Which makes me wonder if anyone has ever gotten anything useful out of these ML frameworks. When it comes down to it, everything seems to reduce to (a) multiplying two matrices and (b) AI'ing a cat photo.
Here's a cat picture. Tensorflow confirms it's a cat picture.
To that, add the mandatory overhead - ranging anywhere between 4 hours and a full week - caused by debugging broken Python crap that needs fixing.
Yeah, OK. $5 board. And Arduino. Thanks.
The Amazing Blinking LED Show, powered by Arduino RISCV.
Is the difference between "someone makes a $5 RISCV Arduino" and a board with a RISCV chip that can be used to build a PC that can boot a usable Linux distro, and can be used for kernel or compiler development, really that obscure?
No-one is going to be able to develop and/or test a RISCV compiler backend or the Linux kernel on a $5 Arduino board. If you don't have a compiler, you can't have a kernel, or a distro. It's a catch-22.
I built LLVM on Fedora 32 RISCV running in QEMU, just for kicks and giggles. I gave QEMU 128GB RAM and 8 RISCV CPUs. It took 9 days with gmake -j4. Does that sound like a usable development environment?
> RISC-V does have a downside in that you're fairly limited for SoC suppliers.
As in none. SiFive made a limited number of development boards in 2018 - the SiFive Unleashed. They are sold out, no longer available, and they were ridiculously expensive to begin with: USD $3000 for a development board.
Combined with the fact that the RISCV spec is (a) unfinished and (b) incomplete. And that no chipmaker has announced an intention of fabbing a RISCV chip. And that no-one has announced an intention of making a RISCV board.
You can boot Fedora 32 RISCV64 in QEMU. That's about it. It's barely usable, even by QEMU virtualization performance expectations.
So yeah, it's nice to talk about RISCV as an alternative to ARM, but the reality is that there is no RISCV. It's just a collection of PDF's.
> Someone put in a LOT of effort to discover vulnerabilities, design, test, manufacture and distribute a knockoff product, one with an inherent level of sophistication and complexity.
I can count on some of the fingers of just one hand the number of countries that would be interested, and willing, to do this.
... you had to write - from scratch - the recursive version of Dijkstra's Shortest Path, in 10 minutes or less, while taking heavy incoming artillery fire from the enemy?
Yeah, me neither.
That's about how useful and realistic these whiteboard coding test scenarios are.
> It's perfectly possible to write reliable, efficient bare-metal code using Rust.
No. It's not.
If one's main motivation for advocating Rust over C as a systems programming language is one's inherent inability to handle C pointers correctly, I would suggest that systems programming falls outside one's area of competence in the first place.
> Anyway, build hello, world (hw) as a std binary with dynamic library loading and suddenly it's <15kB.
%> ldd ./hello-rust
libdl.so.2 => /lib64/libdl.so.2 (0x000014fc720b7000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x000014fc72096000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x000014fc7207b000)
libc.so.6 => /lib64/libc.so.6 (0x000014fc71eb5000)
%> file hello-rust
hello-rust: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=f58dbc13cc7cd51f02dd36e98cef083bf38be942, not stripped, too many notes (256)
Don't quite understand what "dynamic library loading" really means here. Unless it's explicitly statically linked - with -Bstatic, which isn't the case here - the default for Linux executables is dynamically linked. Which is exactly what ldd shows above.
If it was statically linked, ldd would print "not a dynamic executable" and report no dependencies. Nor would there be any DT_NEEDED dependencies recorded in the ELF.
> By default Rust enables symbols and some other things [ ... ]
What are these other things?
Symbols have nothing to do with the programming language. They exist - or don't exist - independent of the language the program was written in. FORTRAN programs can have symbols. C programs can have symbols.
For this particular case, the C and C++ versions of the program have symbols too. I didn't strip them.
At any rate, symbols aren't written to the .text segment. So: No. Nothing to do with symbols, panic, abort(3C) or anything like that.
Userland programs don't panic. Only the kernel panics. Userland programs simply crash, for various reasons.
Why is the C version of the program 15 times smaller than the Rust version? Both versions of the program do one, and just one, same thing: they call write(2) to print an identical character sequence to the stdout file descriptor.
Just useless bloat caused by a sloppy language.