Re: Coating
Of course, you know that buying a new printer is cheaper than buying a cartridge, because the bundled cartridge is smaller than the refill one.
How can people who never learned (whatever flavour of) English as a foreign language tell us (those who did) how that teaching is done? What I learnt in school: first UK English (with the BBC as reference), then, once that first step was OK, US English.
Besides, as other commentards wrote, if I want a global IT site, there are plenty of choices.
What I appreciated about El Reg was its (unique) UK aspect.
My feeling is that the initial flavor/flavour is not totally gone, and I agree with the claim that each of you should write in your own "dialect" (except for units, of course).
Of course I am aware that, for Y > X, a Zen Y core has better IPC than a Zen X core.
However, were I to pick a CPU, what should matter is: instruction set / RAM interface as well as I/O (including chipset) / power usage / price.
Then I expect all 5*00U parts to provide the same instruction set and to be socket compatible.
Plus, a correlation between numerical order and performance level.
If this is not the case, then this is an AMD fail.
If it is the case, I do not mind the Zen level (whatever my admiration for the continuous IPC increase between generations).
My feeling from having a working relationship with Broadcom? Their first answer to a prospect: sending an NDA.
Another company sent me an e-mail address where I could ask questions.
Guess which company I recommended.
As I never worked with VMware, I can say nothing about them.
(my colleagues who were previously VMware employees did not seem to keep bad memories of their experience, though)
Quote: "easier for punters to understand what they are buying"
Me, I may buy phone(s), or tablet(s), or computer(s).
Does the name of the CPU matter?
Maybe (maybe) for a computer.
But for a phone or a tablet, I read reviews to know if performance is OK, that's enough.
And maybe I could follow the same process for a computer.
Oh no.
(leaving aside the case of native code)
Java as a source language, yes (though you should have a look at Kotlin).
But Android does not run a standard JVM:
Java bytecode is transformed into something very specific to Android before your phone dares to execute the application.
What matters to me as a driver is to be aware that there is something in that direction, as well as what that something roughly looks like.
As long as there is a big enough area of really transparent stuff (a.k.a. windscreen and windows), a blurred, rather than invisible, pillar area may be enough.
Thanks for the clarification; I assumed kernel code was compiled to the "preferred" register size of the CPU.
Well, I reckon this is ambiguous for MIPS, but as I remember it (which may be wrong), Alpha was pretty much 64b only, with support for 32b.
I totally agree regarding endianness. Actually, I almost mentioned it, but did not, just to keep it short.
Anyway, the NT kernel was able to run on beasts with very different ways of interfacing with peripherals, for instance, and I never heard my colleagues who wrote drivers say they had to take care of this.
From the ground up, Windows NT was built to be really, really portable.
Proof: its 4 initial platforms:
- 32b x86 (nobody had even thought of a 64b x86 at that time)
- 64b Alpha
- 64b MIPS (I got one at work!)
- PowerPC (I saw references inside the doc., possibly one of my colleagues even used one); I do not remember if it was 32b or 64b
Itanium came later.
To me "is for use in applications that require fast and/or light structured data storage, where raw file access or the registry does not support the application's indexing or data size requirements." is a good description of sqlite.
Any comment from people working with this kind of tool ?
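For what it is worth, here is a minimal sketch of that "fast and/or light structured data storage" pattern using the sqlite3 C API; the table, file name and values are made up for illustration, and it assumes libsqlite3 and its header are available:

/* Minimal sketch: one small indexed table in a single file, via the sqlite3 C API.
   Build with something like: cc demo.c -lsqlite3 */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open("settings.db", &db) != SQLITE_OK)
        return 1;

    /* One table with a primary key: indexing without raw file access or the registry. */
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS settings(key TEXT PRIMARY KEY, value TEXT);"
        "INSERT OR REPLACE INTO settings VALUES('units', 'metric');",
        NULL, NULL, &err);
    if (err != NULL) {
        fprintf(stderr, "sqlite error: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}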
I read the whole set of comments and saw a single reference to timing attacks
(or maybe 2, if you assume the sentence about probing the generated assembly was one, too).
Now, I have a question for people proposing other languages:
Is there a way to make sure a function is guaranteed to execute in constant time (that is: the same time, whatever the input, on the same machine)?
If one wants to be thorough, the same question could be asked of C users.
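To make the C side of the question concrete, this is the usual branch-free comparison idiom (names are mine). Note it only removes the obvious data-dependent early exit; the C standard itself gives no constant-time guarantee, so the compiler and CPU still have the last word:

#include <stddef.h>
#include <stdint.h>

/* Compare two buffers without any secret-dependent branch or early exit. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    size_t i;

    for (i = 0; i < len; i++)
        diff |= a[i] ^ b[i];   /* accumulate differences instead of returning early */

    return diff == 0;          /* single data-dependent decision, at the very end */
}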
I wonder which domain produces the most lines of program code per year?
Sure, the web is the most visible (tautology?), but what about all those embedded programs we do not even see, or do not even know exist?
(I read one day that there are 4K of code inside an electric shaver)
Note this is a genuine question; I do not have a clue.
But what I do know is that JS did nothing for me when writing firmware for various devices. Neither did Python, nor Java, BTW.
Veteran Unix admins are used to the old adage: if it ain't broke, don't fix it.
And BTW, Debian did a good job of improving boot performance by replacing /bin/sh years ago: you can run a search for dash vs. bash.
And BTW*2, boot frameworks that include parallelism while still using shell scripts do exist.
As long as the platform does not get any new specific hardware gadget, is there any need for a change when all the CPU/board-specific stuff is already there?
Using the already cited example: does anything prevent the latest kernel version from working on powerpc/amigaone? (Assuming, of course, that the devices involved are still operational.)
Please, avoid "ad hominem" arguments.
What I said is what I can say after several years of working on pieces of OS for several CPU architectures.
If you want performance, the ISA matters, but as soon as the expected features are present, what matters is the implementation and memory access (and I/O, if your data set needs it). Oh, and regarding SIMD, you should have a look at what the latest ARM ISA offers; you would discover why Fujitsu uses it for HPC beasts.
You may as well apply your last sentence to yourself, as it looks like you forgot some lessons from history.
That is:
x86 was born as a desktop CPU rather than a server one.
Regarding the difference between desktop and server CPUs, I have already heard that one: it is what companies selling servers with their own CPUs told the world when they started to feel the heat from (comparatively) cheap x86 server lines.
BTW, yes, there is a difference between desktop and server CPUs, but the instruction set is moot for that purpose.
Exactly
You could have:
ASSIGN 1000 TO LABEL
GOTO LABEL => will jump to 1000
But then something worse was added:
a way to pass labels as parameters to a subroutine
(that is: a function without a return value),
with a specific syntax inside the subroutine to select a label from the parameter list:
the caller jumps there right after the return!
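For readers who never met this, a rough C analogue of the ASSIGN/GOTO part, using the GCC/Clang labels-as-values extension (not standard C, and just as pleasant to debug):

#include <stdio.h>

int main(void)
{
    void *label = &&thousand;   /* roughly "ASSIGN 1000 TO LABEL" */

    goto *label;                /* roughly "GOTO LABEL": ends up at "line 1000" */

thousand:
    puts("arrived at 1000");
    return 0;
}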
RT as in Real Time, a new class of scheduling.
New, you said? So I had to test it.
This allowed the whole team to discover that a loop decrementing an unsigned 32b integer from its maximum value down to zero can take a while (more then than now).
And of course, there was no way to kill the process, as there was only one (single-core) CPU…
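For younger readers, a rough modern equivalent of that experiment with the POSIX scheduling API is sketched below; on Linux it needs root or CAP_SYS_NICE, the priority value is arbitrary, and you really should not try it on a single-core machine you care about:

#include <sched.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    struct sched_param p = { .sched_priority = 50 };

    /* SCHED_FIFO preempts every lower-priority task and never time-slices with them. */
    if (sched_setscheduler(0, SCHED_FIFO, &p) != 0) {
        perror("sched_setscheduler");
        return 1;
    }

    /* The busy loop from the story: counts down from UINT32_MAX to zero, never yielding. */
    for (volatile uint32_t i = UINT32_MAX; i > 0; i--)
        ;

    puts("done");
    return 0;
}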
This reminds me of when I had to write a FORTRAN program whose natural data set was a small bag of stacks, but there was no way to create an array big enough to hold the largest possible data set (I know, because I tried and got a "virtual memory full" error message).
I had to put the stacks into files and use windowing: operate on in-memory windows of the stacks, and just flush/reload them when needed.