Re: At my first job back in the 90s
Maybe he understood that even boards with static protection became less reliable after every shock and would eventually fail.
So *you* probably never saw a problem. Someone else did...
Indeed, but Softbank had better hurry up.
The original ARM shareholders - including founders - cashed out in 2016, 2 years after RISC-V was publicised. I'm sure they couldn't believe their luck that it took decades for a decent open and royalty-free ISA to appear. That was a good call.
Softbank didn't see RV coming in 2016 and I'm sure that pre-takeover ARM didn't make a point of worrying Mr Son about a few hippies in Berkeley. Now the only way for Softbank to cash out is to sell EITHER a) to someone (e.g. NVIDIA) who knows all about RV and wants decent core designs and designers, OR b) to someone so dim as to not realise that RV threatens ARM's business model, i.e. "licensing the ISA", not "designing decent cores" (or perhaps quite savvy, but confident of finding a greater fool to take it off their hands).
Tried a), that failed. Looks like b). Best of luck, there.
If ARM can keep designing decent cores, regardless of ISA, then it has a future and I'm sure the Cambridge developers know that. But, if the accountants and lawyers are in command, they'll continue the current business model, the only one they understand. Apple are perfectly capable of changing ISA again with almost no notice (they're very good at it); they've paid for the expensive whole chip design and I'm sure they're learning how the decent cores work and will dispense with ARM's ISA as soon as they find it convenient. I'd suggest that the high-end phones will be RV well within 10 years; so will the cheap ones. Don't be surprised if Apple start licensing decent core designs to everyone else.
Really.
I was at Terence Eden's talk at EMF 2018..
https://cdn.media.ccc.de/events/emf/2018/h264-hd/emf2018-1-eng-The_Connected_House_of_Horrors_hd.mp4
And, because I can already hear the penetration testing jokes lumbering over the hill...
https://www.pentestpartners.com/security-blog/we-need-to-talk-about-sex-toys-and-cyber-security/
We just bought a big OLED monitor.
Lovely picture, completely dumb.
I plug my 2015-era laptop into the HDMI; 1080p works nicely over our bargain basement ADSL.
When the chip famine abates (I'm on 4 reserve lists), I'll add an RPi 4 with a DVB-T2 dongle.
It won't phone home, we'll use the software we want (Kodi?) and upgrade it as required.
Our house - built ~1990* - is under 300m from the exchange (exactly 300 paces and my paces are well under 1m). The nearest Openreach FTTC cabinet is on the arterial road past the estate - the other side of the exchange, at least 450m away - and a Gigaclear fibre has been making its stately way down that road for some time, escorted by a retinue of temporary traffic lights.
ADSL is all we need, thanks - the service is great** and getting better as it seems the data hogs are moving to fibre and leaving the rest of us with minimal contention on the DSL. If we really need more bandwidth, then hopefully we can just move to VDSL straight into the exchange (the passing road traffic has a habit of running into the local pavement cabinets).
So don't worry about us, concentrate on taking fibre to those way out in the countryside who have had a rubbish service for years and could really do with it. When FTTP is ubiquitous and cheap we'll take it - but until then the only way they're taking our copper away is from our cold dead hands...
* The period when house builders were installing modular phone wiring, but were not required to use actual BT sockets, thus causing the furrowed brows of several Openreach staff. And yes, the estate should obviously have been laid with fibre from the start.
** Well, it is now - ADSL used to drop out when the weather was very dry or very wet, with a buzz audible on a clear line test (drowning out the dialtone on a really bad day). When we finally started insisting on a fix to what was obviously a dry joint, the first Openreach guy came (by which time the rain had stopped) plugged in a tester, said "it's working" and left. As did the second. The third one listened, nodded, said "I can't really check properly without a proper socket", fitted one free and then walked the line right back to the exchange re-making every connection on the way. It's worked perfectly ever since in all weather. One hopes that guy got a bonus, but I suspect he actually got a bollocking. To all the Openreach staff who actually understand what a dry joint is and how to wield a TDR - I salute you, but there's probably only three of you left.
Which reminds me...
In the early 90s I worked at a university computing service; I had just set up one of the first campus Athena services in the UK with Kerberos (56-bit DES crypto, and you still needed a munitions license from the US DoD) - which made me the local crypto guru. As I recall, even PGP wasn't really a thing yet.
My boss appeared in my office looking worried - the head of finance had just been in touch: a supplier had just sent a contract with a digital signature and the finance staff didn't know how to verify it. Could I talk to him?
Gosh. I had no idea that commerce was this advanced! I asked my colleague who supported the finance and admin staff if she knew anything; she'd never heard of this either!
How had this contract even arrived, given that finance were only just in the era of email? Boss didn't know and so it was decided that all three of us needed to deal with this. An hour or so later, we were ushered in to an impressively large office.
Could we see this contract please?
Head of finance looked very serious and showed us the last page of a wodge of A4 paper, clearly the signature section of a long contract; the supplier had typed* "FRED BLOGGS" in the appropriate box.
There was a very obvious fax header.
After a couple of seconds' silence, I heard myself saying "er... this is a fax".
The head of finance looked at me as if I was a complete idiot and replied "Yes. It's a digital fax machine."
We managed to persuade him that: a) it was still just a fax; b) the typed "signature" was probably perfectly fine, but questions on this would be best directed to a lawyer; c) "digital signature" meant something else.
Straight faces were maintained all the way downstairs; we didn't crease up laughing until we were outside. Even the boss managed a smile and then told us to never say anything about this. Humiliating the head of finance is just never a good idea in any organisation...
* And yes, I mean typed. With a typewriter.
In 1999 I worked with a guy who was part time - the rest of his time, apparently, being quite highly paid to maintain software running on some elderly IBM hardware in a couple of South African banks.
He assured me that several banks in British and Commonwealth countries still processed the accounts in £sd (pounds, shillings and pence), converting everything from and back to decimal on the fly. I thought this was a wind-up - but he never seemed the kind who made that sort of joke. Ever since, whenever I have asked anyone who might know more about this, the answer has always been along the lines of "hmm, maybe..."; never has anyone laughed and told me not to believe that silly urban myth.
This - and other things I've learned over the years about that kind of environment - suggests to me that (a) the story is quite plausible and, getting back to the original topic, (b) keeping this going was much more lucrative for the consultants than converting to a saner system.
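For anyone who never handled pre-decimal money: the arithmetic of that on-the-fly conversion is simple but fiddly, which is exactly why it paid the consultants so well. Here's a purely illustrative sketch (the internal layout is invented - I know nothing about what those IBM systems actually stored) of the round trip between decimal new pence and £sd, where £1 = 20 shillings = 240 old pence:

```python
# Pre-decimal currency: 1 pound = 20 shillings, 1 shilling = 12 (old) pence,
# so 1 pound = 240 old pence. Decimal currency: 1 pound = 100 new pence,
# so 1 new penny = 2.4 old pence. Multiplying by 12 and dividing by 5
# keeps everything in exact integer arithmetic - no floats near money.

def decimal_to_lsd(new_pence: int) -> tuple[int, int, int]:
    """Convert an amount in decimal new pence to (pounds, shillings, pence)."""
    old_pence, rem = divmod(new_pence * 12, 5)
    if rem:
        raise ValueError("amount has no exact old-pence representation")
    pounds, rest = divmod(old_pence, 240)
    shillings, pence = divmod(rest, 12)
    return pounds, shillings, pence

def lsd_to_decimal(pounds: int, shillings: int, pence: int) -> int:
    """Convert pounds/shillings/pence back to decimal new pence."""
    old_pence = pounds * 240 + shillings * 12 + pence
    scaled = old_pence * 5
    if scaled % 12:
        raise ValueError("amount has no exact new-pence representation")
    return scaled // 12

# £12.35 decimal (1235 new pence) is exactly £12 7s 0d in old money
print(decimal_to_lsd(1235))   # (12, 7, 0)
print(lsd_to_decimal(12, 7, 0))  # 1235
```

Note that not every amount converts exactly in either direction (a halfpenny of 0.5 old pence has no new-pence equivalent) - the rounding rules around that boundary are presumably where the real consultancy hours went.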
Back in 2000/2001, when out in the Bay Area, I recall being told that MS were in the habit of making small changes to SMB which clearly had no functional purpose.
The person who told me that was one Jeremy Allison. :-)
Well, it's a couple of decades ago and my memory's not perfect, but I definitely understood that while there may have been no overt MS policy to break Samba, doing so was certainly not discouraged and the most plausible explanation was that MS were probing how fast the Samba team reacted to these changes. The answer being, of course, "immediately".
The coal fired part of Didcot A shut in 2013 and is now demolished, but the gas turbine was retained as a backup and can still supply the grid.
The 1997 gas turbine at Didcot B is still running.
Didcot is very handy for all the compute needs of Oxford (just north), Culham & Milton Park (next door) and Harwell, including Rutherford Appleton Lab and Diamond Light Source (just south).
It's usefully close to the Reading/Bracknell/Basingstoke area, not forgetting Aldermaston.
There are many tech-qualified people in the area, and Didcot / Sutton Courtenay has recently spawned a huge amount of housing for tech and non-tech staff.
If I was setting up a big data centre, Didcot is *exactly* where I'd put it.
You could, with enough power and will, make it effectively impossible for me to gain access to a pig, dead or alive.
You're going to find it harder to deny me access to AES, given that it's widely documented and even the cheap laptop I'm typing on has hardware support. I hear you argue that Reg commentards are not "the general public"; fair enough but I'm certainly happy to be one of the folks who continues to make e2e available to everyone with traffic disguised as quotes from the collected speeches of Joe Biden* with plausible deniability and forward secrecy built in.
Way back when PGP was young, using it conveniently notified NSA/GCHQ that your email** was of interest to them. The spooks now want widespread encryption for security, as long as it is back-doored (or they have access to the in-clear state via Google, FB etc) and that does make it harder for them to work out what might be "legal" vs "non-legal" payload of your traffic, especially if it can plausibly be decrypted as pictures of playful kittens or (legal) porn. If this is needed, it will be developed and widely used. I'll agree that it won't be used by most of the general public most of the time, but those that want it will really want it; some will use it much of the time, many will use it some of the time.
Like others here I'm actively working on keeping my own stuff under my control.*** This is all a bit Fahrenheit 451, but keeping the knowledge in our heads is prudent. Post-Snowden, reporters now know that decent e2e software is a thing, which helps.
* Greybeards out there in the Regiverse will remember that someone tweaked an algorithm thus in an attempt to demonstrate to then-senator Biden that this crypto thing was a bit harder than he might imagine.
** All your email thereafter, not just the pgp stuff...
*** Not too much of a problem admitting this here as, obviously, anyone reading El Reg is suspect anyway.
As well as refuge from the masses, the Weston Library also has central Oxford's most well appointed, spacious and crowd-free public* lavatories. Not an unimportant matter just at the moment, if Reg readers fancy seeing the sights of Oxford.
*OK, they're technically for the Weston café, but no-one will stop you.
All the above seconded - last year I moved email to MB from Gmail, and an org.uk domain to MB from 123-reg. Should have done it years ago. Everything works well, there's no fuss and even their web front end is pleasant to use and free of the usual clutter.
I'm a very happy customer of MB and - on behalf of my two .org.uk domains - very happy to register my happiness that they support Public Benefit.
Moved our personal domain from 123-reg to Mythic Beasts a couple of years back, then migrated email from Gmail to MB. Ashamed I didn't do this a decade ago.
It costs a small amount more, but absolutely worth it for the excellent service.
No association with Mythic Beasts other than being a happy and relaxed customer.
+1
We tried the pepper. Utter failure.
I'm convinced the squirrels got a taste for it, they seemed to invite their family round for Spicy Seed Curry.
We eventually got a Squirrel Buster, which really has defeated the squirrels. I'd recommend them (unless you have parakeets, in which case all bets are off).
But, to get back on topic, a pair of magpies are now working out how to defeat it.
It's fascinating to watch; they're getting better every time... you have to respect them.
The x86 CISC architecture is effectively byte code interpreted by the "Core" RISC processor under the covers, so I'm sure Intel put in sufficient physical registers to cope. :-)
The important point is that RISC was the right development for several reasons, not just the "fast decoding" meme which seems to dominate "Why RISC?" explanations. The acronym is usually read to mean a reduced set of instructions - and while designs like RISC-V are minimal (noting Dave Patterson's idea that you can tell it's RISC when a booklet of opcodes requires no staple), one can also read it as a set of reduced instructions - which become practical to use when optimisers can emit efficient code.
As a greybeard FORTRAN wrangler explained to me, way back when...
The *real* point of RISC was that it worked round the memory bandwidth problem.
Processor speed and memory size were increasing exponentially but memory bandwidth could never keep up. A simulation which ran acceptably fast in 1986 could, by 1988, run on a machine with 2x clock speed and 2x memory, so one might innocently assume that by '88 a model 2x as detailed could complete in the same amount of time, yes? Except no, because the memory bus would also need to be 2x faster (that's bandwidth AND latency). You'd be doing well if your new system's memory bus was any faster at all than a couple of years ago; 20% improvement was quite something. So a CISC op directly on memory might have been: 20 cycles waiting for the read, one cycle processing, 20 cycles waiting for the write. Before long it would be 40 cycles read, one compute, 40 cycles write...
In RISC architectures, logic / maths ops are all register to register, with separate load / store ops to transfer between registers and memory. RISC processors did indeed decode instructions faster, but by removing the fancy addressing modes and building in many more (usually 16) registers, efficient code could read from memory into registers - and do useful work while waiting - then run a sequence of register to register ops on what you just read in, finally writing out the result while getting on with something else again, which kept processor and memory as busy as possible.
Unfortunately for a generation of assembly programmers, interleaving the ops effectively and reliably was much harder than CISC. My mid 80s CS degree used M68K and my first job was VAX assembly; both had easy-to-use ops with all sorts of convenient addressing modes and both were clearly Not The Future. To use RISC properly, you need optimising compilers to munge FORTRAN (all the compute bods are ever bothered about) into performant code.
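The arithmetic above can be put in a toy model. This sketch uses the comment's illustrative numbers (20-cycle memory access, 1-cycle ALU op - not real hardware figures) to show why hiding memory latency behind register-to-register work wins so decisively:

```python
# Toy cycle-count model of the memory-bandwidth argument. Numbers are the
# illustrative ones from the comment above, not measurements of any real CPU.

MEM_LATENCY = 20   # cycles stalled on one memory read or write
ALU_COST = 1       # cycles for one register-to-register op

def cisc_cycles(n_ops: int) -> int:
    """Memory-to-memory CISC style: every op is read, compute, write,
    strictly in sequence - the processor idles through every access."""
    return n_ops * (MEM_LATENCY + ALU_COST + MEM_LATENCY)

def risc_cycles(n_ops: int) -> int:
    """Idealised RISC load/store style: loads are issued early and stores
    drain in the background, so only the first load and the last store are
    exposed; the register-to-register work hides everything in between."""
    exposed = MEM_LATENCY + MEM_LATENCY   # first load + final store
    return exposed + n_ops * ALU_COST

for n in (1, 10, 100):
    print(n, cisc_cycles(n), risc_cycles(n))
```

At a single op the two are comparable (41 vs 41 cycles), but at 100 ops it's 4100 vs 140 - and the gap only grows as the memory/compute speed ratio worsens, which is exactly the trend the greybeard was describing. The "ideal" interleaving is of course the hard part, hence the need for optimising compilers.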
It's at least a way to educate MPs that a problem exists - so if anyone does sign it, might as well fire off an email to your MP to tell them.
Yes, I know MPs have no technical knowledge but bear in mind that - with honourable exceptions - most of them (and definitely most ministers or shadows) don't know much about any of the topics they actually deal with.
Technically there's no problem. Transmeta wanted to do this way back when, with x86 and Power (IIRC). Looks like Tachyum are having a go; I'd be interested to know how much fun they're having with the licensing lawyers at ARM and Intel.
https://www.eenewsembedded.com/news/x86-arm-and-risc-v-software-running-tachyum-prodigy
Or how about Loongson: https://venturebeat.com/2015/09/03/chinas-loongson-makes-a-64-bit-mips-processor-that-can-run-x86-and-arm-code/
From the article: "It’s not clear how Loongson can legally do the x86 emulation, as Intel owns the x86 architecture and licenses very few chip makers". Looking at the Loongson Wikipedia page, it seems what they're actually doing is hardware acceleration for QEMU, along the lines of RISC-V's J extension aimed at Java and Javascript. That seems much more plausible: generic hardware acceleration for the subset of commonly used instruction patterns which can really use it; translate or interpret the remainder in software.
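To make the "accelerate the common subset, interpret the rest" split concrete, here's a purely illustrative sketch - a made-up two-instruction guest ISA and a dispatch loop, nothing to do with Loongson's or QEMU's actual internals. The point is the shape: frequent, simple instruction patterns go down a fast path (which is what the hardware assist would target), and everything else falls back to a generic software handler:

```python
# Illustrative only: a tiny interpreter for an invented guest ISA, showing
# the fast-path / fallback split described above. In a real system the
# fast path is what hardware acceleration (or a JIT) would replace.

def fast_add(regs, a, b, dst):
    """Hot path: the common, simple pattern - candidate for acceleration."""
    regs[dst] = (regs[a] + regs[b]) & 0xFFFFFFFF

def slow_generic(regs, op, *args):
    """Catch-all software fallback for rarer or messier instructions."""
    if op == "mulhi":  # e.g. upper 32 bits of a 32x32 multiply
        regs[args[2]] = (regs[args[0]] * regs[args[1]]) >> 32
    else:
        raise NotImplementedError(op)

def run(program, regs):
    for op, *args in program:
        if op == "add":              # dispatch: accelerated subset...
            fast_add(regs, *args)
        else:                        # ...or interpret in software
            slow_generic(regs, op, *args)
    return regs

regs = {"r1": 2, "r2": 3, "r3": 0, "r4": 0}
run([("add", "r1", "r2", "r3"), ("mulhi", "r1", "r2", "r4")], regs)
print(regs["r3"], regs["r4"])  # 5 0
```

The legal point stands either way: accelerating your own generic dispatch loop is a very different proposition from claiming to implement x86 in silicon.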
I've thought the same for a lot longer, though I have no archived blog posts to prove anything. ESR is not particularly prescient.
Actually, I think the more useful of your old posts is this one...
https://web.archive.org/web/20180823153529/http://www.blitterandtwisted.com/2016/11/why-windows-will-move-to-a-linux-kernel.html
And, er, yeah, agreed - though I came from a different point of view. The progression to Windows-on-Linux-kernel was clear as soon as the Gates/Ballmer era ended and grown-ups took over. Satya and co watched Google destroy Windows Phone with a proprietary user space on top of the Linux kernel. They learned from the pain of Vista and the ARM port, and I would personally bet that there's been a functional Windows-on-Linux test rig running in M$ labs for several years now. It'll be very secret but only because the markets might find the idea a bit scary without a *very* carefully crafted announcement, not to guard any particularly secret sauce.
Anyone following RISC-V will note that no RV hardware firms (think NVIDIA) are moaning that a Windows port is needed - unlike the 90s when lack of Windows helped* to take down both alpha and itanium and lumber us with x86 for the next quarter century. RISC-V folk all agree that Linux is vital and if M$ aren't worried, that's because an official Windows userspace will be running on Linux kernels by the time RV64G kit actually hits the streets.
* Yeah, I know there were lots of other reasons, but Not Running Windows was a big one. And no, Windows never worked usefully on alpha