* Posts by Torben Mogensen

491 posts • joined 21 Sep 2006


RISC OS: 35-year-old original Arm operating system is alive and well

Torben Mogensen

Re: Some features i would like today

I would really like to see the file-type concept extended to cover directory types as well. As it is, directories with names starting with ! are apps, but that would be better done as a directory type. I also recall that some word processors saved documents as apps, so pictures etc. were stored in separate files inside the directory, but you could still open the document by clicking its icon. This, too, would be better handled by directory types, so documents would not need names starting with !.

Torben Mogensen

RISC OS needs to be rewritten in a high-level language

RISC OS is mainly ARM assembler, which means it is locked to 32-bit ARM processors while the rest of the world moves to 64-bit. There are even ARM processors now that are 64-bit only, so they cannot run 32-bit code.

Porting to 64-bit ARM assembler is just another dead end, so the OS needs to be rewritten in a high-level language with minimal assembly code. Making RISC OS into a Linux distribution that keeps as many RISC OS features as possible might even be a reasonable choice.

Torben Mogensen

Re: Some features i would like today

I don't see file types as worse than three-letter file-name extensions. Even with file-name extensions, programs need to know what they mean -- you can't just create a new extension such as .l8r and expect programs to know what to do with it. The OS needs to assign default applications to each file-name extension, which isn't much different from assigning default applications to file types, as RISC OS does by setting OS variables (which can also assign icons to file types). And there is the same risk of the same file-name extension or type being used for different purposes, creating confusion. I have observed this several times with file-name extensions.

The main problem with RISC OS file types is that there are too few: 12 bits give 4096 different types, whereas three alphanumeric characters (case insensitive) give 46656 different file-name extensions, over ten times as many. 32-bit file types would work in most cases, and picking a random type would be unlikely to clash with existing types. 64 bits would be even better.

Also, a lot of programs expect file-name extensions and take, for example, a .tex file and produce .aux, .log, .pdf and so on with the same root name. If these are compiled for RISC OS, the root name becomes a directory containing files named tex, aux, log, and pdf (since RISC OS uses . as the directory separator). In some cases it is nice that all these files are collected in a single folder, but you often need to copy the pdf file to some other location, and then you lose the root name unless you rename the file. You could modify LaTeX and other programs to use file types instead, but that would be a major effort. The best solution is probably a wrapper for Linux programs that, in a temporary directory, converts the typed files (in their subdirectories) to extension-named files and afterwards converts back to file types. This wrapper could be generic for all Linux command-line programs, as long as it has a translation table for all the file-name extensions and file types involved. Has such a wrapper ever been made?
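
A minimal sketch of what such a wrapper could look like on the Linux side, assuming the RISC OS document "root" shows up as a directory containing typed files (tex, aux, log, pdf); the /tmp path and the pdflatex invocation are just placeholders, not part of any existing tool:

    /* Hypothetical wrapper sketch: copy typed files out under extension
       names, run an unmodified Linux tool, then copy known outputs back. */
    #include <stdio.h>
    #include <stdlib.h>

    static const char *exts[] = { "tex", "aux", "log", "pdf" };

    int main(void)
    {
        char cmd[512];
        /* root/tex (RISC OS layout) -> /tmp/wrap/root.tex (Linux layout) */
        system("mkdir -p /tmp/wrap && cp root/tex /tmp/wrap/root.tex");
        /* run the unmodified command-line program on the renamed file */
        system("cd /tmp/wrap && pdflatex root.tex");
        /* copy any outputs back into the typed-file layout */
        for (int i = 0; i < 4; i++) {
            snprintf(cmd, sizeof cmd, "cp /tmp/wrap/root.%s root/%s 2>/dev/null",
                     exts[i], exts[i]);
            system(cmd);
        }
        return 0;
    }

A real wrapper would of course read the type-to-extension mapping from the OS variables and take the command as an argument, but the copy-out, run, copy-back structure would be the same.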

The sad state of Linux desktop diversity: 21 environments, just 2 designs

Torben Mogensen

RISC OS

As the author indicated, RISC OS had (from around 1989) many features that didn't make it into Windows before Win95 (and some that no later system has). Apart from purely cosmetic things like marbled title bars and scroll bars whose size shows how large a fraction of the contents is visible (something that didn't make it into competing systems until much later), RISC OS used a three-button mouse with the following functions:

Left ("select"): works mostly as the left button on most modern GUIs

Middle ("menu"): Pop-up menu activation

Right ("adjust"): does a variation of the left button, for example selecting a file icon without deselecting the already selected, selecting a window for input focus without sending it to the front, selecting a menu item without closing the menu, and so on.

This is something I miss in Linux GUIs.

It also had something like TrueType (but earlier and, IMO, better, since it used cubic splines instead of quadratic splines and allowed anti-aliasing) which let all apps share the same fonts. Printing was done by rendering pages with the same engine used for screen rendering, so it was truly WYSIWYG (unlike the Mac at the time). The only disadvantage was slow printing, as everything was sent as a bitmap (though you could print raw text using the printer's built-in fonts). But it made installing new printers quite easy: you just had to describe the commands for entering graphics mode and how pixels are sent to the printer. I had a printer that was not in the list of already-supported printers, and it took me less than half an hour to get it running.

Heresy: Hare programming language an alternative to C

Torben Mogensen

300 languages??

Already in 1966, Peter Landin wrote a paper called "The Next 700 Programming Languages" (read it if you haven't), citing an estimate from 1965 that 1,700 programming languages were in use at the time. I seriously doubt that this number has declined to 300 by 2022. Rather, I expect there are at least 17,000 languages by now. Granted, some of these are domain-specific languages used exclusively in-house by companies or educational institutions, or experimental languages under development (I would categorize Hare as one of these). But even the number of languages used for serious development in at least a double-digit number of places would still exceed 300.

ZX Spectrum, the 8-bit home computer that turned Europe onto PCs, is 40

Torben Mogensen

Re: "Rival machines, such as the Commodore 64, did not suffer from the same problem"

I came to the comments to correct this misunderstanding, but you beat me to it.

The BBC micro was one of the few among the 8-bit crowd to have separate colour information for every pixel. It did use more memory for similar screen resolution (when using more than two colours), but was much easier to use.

Take this $15m and make us some ultra-energy-efficient superconductor chips, scientists told

Torben Mogensen

The theoretical limit

The article says that they might be "approaching the theoretical limit of energy efficiency". I assume they mean the Landauer limit, which puts a lower bound on dissipation when performing operations that lose information. For example, an AND gate has two inputs and one output, so it loses a bit over one bit of information on average (about 1.19 bits, assuming uniformly random inputs). But it is possible to go under this limit if you don't lose information, i.e., if all operations are reversible. In this case, there is no lower limit. And you can do a surprising amount with only reversible operations -- though you sometimes need to output extra "garbage" bits in addition to the result bits to preserve reversibility.
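
For the curious, a back-of-the-envelope check of that figure (a sketch assuming the two inputs are independent and uniformly random; compile with -lm):

    /* Information lost by an AND gate: entropy of the inputs (2 bits)
       minus the entropy of the output, which is 1 with probability 1/4. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double p1 = 0.25, p0 = 0.75;                     /* P(out=1), P(out=0) */
        double h_out = -(p1 * log2(p1) + p0 * log2(p0)); /* about 0.81 bits */
        printf("bits lost per AND operation: %.3f\n", 2.0 - h_out); /* ~1.19 */
        return 0;
    }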

Still, the Landauer limit is several orders of magnitude below the energy current semiconductor gates use, and current chips dissipate more in the wires than in the gates, so superconducting wires and gates would reduce power dissipation quite radically. As for cooling, the cooling effort is proportional to power dissipation, so while superconducting chips need to be cold, they don't heat their environment very much; in a well-insulated container, the cost of keeping them cold may not be too bad.

Why the Linux desktop is the best desktop

Torben Mogensen

Re: Linux "Desktop"

"Something must be compelling for so many businesses to use Microsoft Windows"

Inertia, mostly. If a company has used Windows for 20+ years, they need a very good reason to change, since change costs. So it is not so much a compelling reason to use Windows, it is lack of a compelling reason to use Linux.

Also, many companies use Microsoft software beyond Office and Outlook, for example SharePoint and Microsoft Dynamics. Or they may run legacy systems that only work on Windows servers.

But, yes, for the average office worker, Linux is fine. But the server software is harder to migrate.

Said by someone who is a daily Linux user and hasn't used Windows for years (except when trying to help other people who have problems with their Windows machines).

Any fool can write a language: It takes compilers to save the world

Torben Mogensen

Regarding = and ==, I fully agree that assignment and equality should use different operators. But I much prefer the Algol/Pascal choice of using = for equality and := for assignment. = is a well-established symbol for mathematical equality, whereas mathematics generally doesn't use assignment to mutable variables, so there is no a priori best symbol for that. So why use the standard equality symbol for assignment and something else for equality? I suspect it was to save a few characters, since assignment in C is more common than equality testing. But that is not really a good reason.
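
The classic illustration of the cost of C's choice is the accidental assignment inside a condition, which compiles without complaint:

    #include <stdio.h>

    int main(void)
    {
        int x = 5;
        if (x = 0)                           /* meant ==, but this assigns 0 */
            printf("x is zero\n");
        else
            printf("x is now %d\n", x);      /* prints "x is now 0" */
        return 0;
    }

With the Pascal convention, the corresponding "if x := 0 then" is simply a syntax error, so the typo is caught immediately.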

Note that I'm fine with using = for definition of constants and such, because this implies mathematical equality (and = is used in maths for defining values of named entities). Context is needed to see whether = is used to declare equality or to test for equality, but the context is usually pretty clear.

Torben Mogensen

Re: I miss a critical note and some figures.

WebAssembly (WASM) is better defined than GCC and LLVM (it even has a formal specification), and it is simpler too, so it makes an easier target language than the GCC or LLVM intermediate representations. Unfortunately, WASM is intended for running in browsers, so it is sandboxed and requires an external program to handle I/O and everything else you would normally get from an OS, so it is not suited for all purposes. Also, the existing implementations of WASM are not super fast.

Speaking of browsers, several languages use JavaScript as a target language, allowing them to run in browsers. So much effort has been put into making the travesty called JavaScript run fast in browsers that these languages can have competitive performance. But since JavaScript is very ill-defined, this is not a good solution.

The article also fails to mention JVM and .NET as compiler targets. While farther from silicon than both the GCC and LLVM intermediate languages, they can provide decent performance and (more importantly) access to large libraries (which, unfortunately, in both cases require you to support the object models of Java and C#, respectively).

And while I agree that you need parallelism for compute-intensive applications, not all applications are that, so there is still plenty of room for languages and compilers that do not support parallelism.

The wild world of non-C operating systems

Torben Mogensen

Re: ARM Holdings Instruction Set Bias

The first versions of RISC OS used no C whatsoever. IIRC, a C compiler only came later. ARTHUR, the predecessor to RISC OS, did indeed write parts of the GUI in BBC BASIC, but when RISC OS replaced it, the GUI was all assembly language.

As for C influence on the ARM ISA, I doubt there was any. Yes, there are autoincrement/decrement loads and stores, but I believe these were inspired more by PDP11 and VAX, and were more general than the C ++/-- operators. Essentially, after you calculate an address (which can include a scaled register offset), you can write that back to the base address register.

Torben Mogensen

Re: RISC OS

RISC OS is an impressive bit of software, but the fact that it is written in 32-bit ARM assembler (which was a somewhat sensible decision back in the 1980s, when it was written) makes it a dead end. It can't even run on the 64-bit-only ARM variants, which are becoming increasingly common.

AFAIR, RISC OS was not Acorn's first go at an OS for their ARM-based machines. There was a project written in (IIRC) Modula2, but that was far from ready by the time the first Archimedes hardware was finished, so Acorn made ARTHUR (some jokingly said it was short for "ARM OS by Thursday"), which took many elements from the BBC Micro's OS (which was written in 6502 assembly language) and added a very limited GUI. After a year or so this was replaced by RISC OS, which was clearly an extension of the code used in ARTHUR. After another year (about 1990, IIRC), this was upgraded to RISC OS 2, which had many advanced features that would not appear in Windows until Win95, and some that haven't made it there yet. At the time I loved it, but in spite of an active fan base, it has no long-term prospects unless rewritten in a high-level language that can be compiled for multiple platforms. Rust would be ideal, but it is probably hard to retrofit RISC OS to the Rust memory model. Maybe it is better to write a new OS from scratch that takes the best elements of RISC OS and adds support for newer things such as multicore processing and Unicode.

'We will not rest until the periodic table is exhausted' says Intel CEO on quest to keep Moore's Law alive

Torben Mogensen

Re: "We will not rest until the periodic table is exhausted"

Does Itanium count as a transuranic heavy element?

Torben Mogensen

Re: "two advanced chip factories in Arizona"

That virus is probably called "tax breaks".

Torben Mogensen

Re: "progressing along a trend line to 1 trillion transistors per device by 2030"

The reason clock speed hasn't increased in the last decade is that, while Moore's Law continues to hold, Dennard scaling does not. Dennard scaling is the notion that energy use is roughly proportional to chip area, so as transistors shrink, you can fit more in the same area without using more power -- or you can reduce the chip area to a quarter and double the clock rate for the same power usage (power usage being roughly proportional to the square of the clock rate). But Dennard scaling stopped around 2006, so to get a higher clock rate, you needed to increase power use (and heat dissipation).

As a consequence, the trend shifted from increasing clock rate (with quadratic power usage) to having more cores per chip (with linear power usage) and shutting down cores (and functional units) not currently in use. This is why your laptop fan goes crazy under heavy load -- all cores are in use.

So the problem is not really shrinking -- that gives relatively little. The problem is power use, and that has been somewhat addressed with FinFETs and specialised transistors, but that will only take you so far. Other materials may help, as will superconducting transistors (but these are fiendishly difficult to combine into a complex circuit, as they interfere heavily with each other). Other potential solutions are asynchronous processors (so you don't have a global clock) and reversible gates (which have a lower theoretical power usage than traditional irreversible gates such as NAND and NOR). But these have yet to be realised at scale.

Java 17 arrives with long-term support: What's new, and is it falling behind Kotlin?

Torben Mogensen

Pattern matching is not a big deal???

I can't see why the author of the article doesn't think pattern matching is a big deal. I use pattern matching in almost everything I code, though mostly in functional languages (ML, F#, Haskell), where the support is better than in Java. For example, functional languages support nested patterns and pattern matching over base types such as integers. Oh well, I suppose this will eventually make it into Java. Most features of functional languages already have. As Bob Harper once said: "Over time, every language evolves to look more and more like Standard ML".

Glasgow firm fined £150k after half a million nuisance calls, spoofing phone number, using false trading names

Torben Mogensen

Peanuts

£150K for 500K calls is 30p per call. That's not a lot -- I'm sure they paid the caller staff more than that per call. Spam callers should be fined a lot more, otherwise fines are just part of the budgeted expenses.

What's that hurtling down the Bifröst? Node-based network fun with Yggdrasil 0.4

Torben Mogensen

What's with the Ös?

It may seem a bit metal to add random umlauts over Os (I blame Motörhead). But in the Nordic languages the umlaut does, in fact, change the pronunciation, unlike in English, where two dots (a diaeresis rather than an umlaut) indicate that vowels are pronounced separately rather than as a diphthong (as in naïve). Bifrost and Ragnarok definitely have O sounds, not Ö sounds.

Realizing this is getting out of hand, Coq mulls new name for programming language

Torben Mogensen

*Bleep*

Given the sounds that cover up four-letter words on TV, how about using the name *Bleep*? I'm sure a suitable backronym can be found. "p" could obviously stand for "prover", and "l" could stand for "logic", but the people behind the language would probably prefer French words. Any suggestions?

Blessed are the cryptographers, labelling them criminal enablers is just foolish

Torben Mogensen

Are banks criminal?

When I use my online bank services, I believe (and seriously hope) that all traffic is strongly encrypted. Does that make the banks criminal? (OK, they may be, but for other reasons.) What about the VPN I need to use to access my work server when not on the local network? What about using https instead of http? And so on.

If governments do not want us to use crypto, they should show an example and stop using it themselves, making all documents and communication public. Like that's ever gonna happen.

Ah, you know what? Keep your crappy space station, we're gonna try to make our own, Russia tells world

Torben Mogensen

Unmanned space station == satellite?

An unmanned orbital space station is just a satellite by another name. "Not permanently manned" could mean anything from short-term maintenance crews every five years to almost always manned, but given that it is stated that the reason for not having permanent manning is radiation, my guess is that it is closer to the first. Higher radiation probably means inside the inner Van Allen belt, which is lower than ISS. This would lower the cost, but require more frequent boosting to maintain orbit.

What's this about a muon experiment potentially upending Standard Model of physics? We speak to one of the scientists involved

Torben Mogensen

Connection to new Dark Matter model?

Researchers at the University of Copenhagen recently released a theoretical study where they replace Dark Energy with adding magnetic-like properties to Dark Matter. It would be interesting (though highly unlikely) if the observed muon magnetic anomaly was related to this.

Where did the water go on Mars? Maybe it's right under our noses: Up to 99% may still be in planet's crust

Torben Mogensen

Not really surprising

As the article states, water on Earth is recycled by volcanic activity and would otherwise not be found in any great quantity on the surface. This has been known for a long time, so it is not really a surprise that the lack of volcanic activity on Mars has contributed to its loss of liquid water.

What is new is that measurements of H2O vs. D2O can give a (very rough) estimate of how much water was lost underground compared to how much was lost to space.

In any case, for those who dream of terraforming Mars, its low gravity and lack of volcanic activity will make it hard to sustain a viable biosphere without having to replenish it forever. In spite of its current unfriendly environment, I think Venus is a better long-term option for terraforming: Blow away most of the current atmosphere and add water. Redirecting comets from the Kuiper belt to hit Venus will contribute to both. Sure, we are a long way from being able to do that, but in the long run, it will make more sense.

Memo to scientists. Looking for intelligent life? Have you tried checking for worlds with a lot of industrial pollution?

Torben Mogensen

What good would it do us to build and send such a missile if the other side has already launched one (or will do so before our missile arrives)? At best, the satisfaction, as we all die, of knowing we will be avenged, but that is a poor comfort.

And, in the event that such a missile misses its mark or is intercepted, we will have made an enemy that might otherwise have been an ally.

Interstellar distances are so large that invasion of another civilized planet is unrealistic. We can destroy one, yes, but invasion assumes that there is something worthwhile left to invade. And the amount of war material that it is realistic to bring across interstellar distances would be relatively easily countered by the defender, even if their level of technology is lower -- as long as they have orbital capability. Added to that, invasion is only really worthwhile if the goal is colonization -- sending goods back to the mother planet is too expensive to be worth it -- and sending a large number of colonizers across interstellar space is unrealistic. This is why invasion SciFi postulates hypothetical technologies such as FTL flight.

It might make sense to colonize extrasolar planets that have biospheres but no civilization. You can send frozen fertilised eggs there and let them be raised by robots until they grow up. This will in no way help Earth, but it can ensure long-term survival of the human species.

PayPal says developer productivity jumped 30% during the COVID-19 plague

Torben Mogensen

Meetings

I'm sure the main reason is that developers didn't waste so much time on useless meetings. In Zoom meetings, they can code in another window and only pay attention to the meeting in the 5% of the time when something useful is said.

Useful quantum computers will be impossible without error correction. Good thing these folks are working on it

Torben Mogensen

"All we have to do is put them together"

That must be the understatement of the decade. Problems arise in quantum computers precisely when you put elements together. Each element may perform predictably on its own, but when you put them together, chaos ensues.

Arm at 30: From Cambridge to the world, one plucky British startup changed everything

Torben Mogensen

Re: Depends on what you mean by "reduced"

"The *real* point of RISC was that it worked round the memory bandwidth problem."

That too, but mostly the load-store architecture prevented a single instruction from generating multiple TLB lookups and multiple page faults. On a Vax, a single instruction could (IIRC) touch up to four unrelated addresses, each of which could require a TLB lookup and cause a page fault. In this respect x86 isn't all bad, as most instructions touch only one address each (though they may both load from and store to that address).

On the original ARM, a load/store-multiple instruction could cross a page boundary, which actually caused faulty behaviour on early models.

A load-store architecture requires more registers, which is why ARM had 16 registers from the start, something x86 only got in the 64-bit version. In retrospect, letting one register double as the PC (a trick they got from the PDP-11) was probably a mistake, as it made the pipeline visible, which caused complications when the pipeline was lengthened (as it was in the StrongARM).

Torben Mogensen

Re: Depends on what you mean by "reduced"

"As you can imagine, decoding an "instruction" is a lot harder if you don't know how many bytes it contains until you've already begun decoding the first part!"

Even worse, you can't begin decoding the next instruction until you have done a substantial part of the decoding of the current instruction (to determine its size). Decoding the next N instructions in parallel is easy if they are all the same size, but difficult if they are not. You basically have to assume that every byte boundary can be the start of an instruction and start decoding at all of them, throwing away a lot of work when you later discover that these were not actual instruction starts. This costs a lot of energy, which is a limiting factor in CPUs, and is getting more so over time.

You CAN design multi-length instructions without this problem, for example by letting each 32-bit word hold either two 16-bit instructions or a single 32-bit instruction, so you can decode at every 32-bit boundary in parallel. But this is not the case for x86, which has grown in bits and pieces over time, so it is a complete mess. So you need to do speculative decoding, most of which is discarded.
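
To illustrate, here is a toy decoder for a hypothetical encoding of that kind (the flag bit and the word values are made up, not any real ISA):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical encoding: the top bit of a 32-bit word says whether it
       holds one 32-bit instruction or two 16-bit ones, so every word can be
       decoded without looking at its neighbours. */
    static void decode_word(uint32_t w)
    {
        if (w >> 31)
            printf("one 32-bit instruction: %08x\n", (unsigned)w);
        else
            printf("two 16-bit instructions: %04x %04x\n",
                   (unsigned)(w & 0xFFFF), (unsigned)((w >> 16) & 0x7FFF));
    }

    int main(void)
    {
        uint32_t code[] = { 0x80001234u, 0x12345678u };  /* made-up words */
        for (int i = 0; i < 2; i++)
            decode_word(code[i]);  /* each word decodes independently */
        return 0;
    }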

Torben Mogensen

You can't use the number of transistors to measure RISC vs. CISC. The majority of transistors in modern CPUs are used for cache, branch prediction, and other things that don't depend on the size or complexity of the instruction set.

Torben Mogensen

Who killed MIPS?

The article states that Arm killed off its RISC rival MIPS. I do not believe this to be true. IMO, it was Intel's Itanium project that killed MIPS: Silicon Graphics, which at the time had the rights to MIPS, stopped development to join the Itanium bandwagon, long before any Itanium hardware was available. Hewlett-Packard (which had its own PA-RISC architecture) did the same, as did Compaq, which had recently acquired the Alpha architecture from DEC. So, effectively, Itanium killed three of the four dominant server RISC architectures (the fourth being Sun's SPARC architecture, later acquired by Oracle), and that was based solely on wildly optimistic claims about future performance made by Intel. MIPS continued to exist as an independent company for some years, but never regained its position. It was eventually open-sourced and used as the basis of some Chinese mobile-phone processors, but these were, indeed, swamped by Arm. Itanium didn't affect Arm much, except that Intel stopped producing StrongARM (acquired from DEC) and its successor XScale.

So, while Itanium itself was a colossal failure, it actually helped Intel gain dominance in the server market -- with x86 -- because it had eliminated the potential competitors. Now, it seems Arm is beginning to make inroads into that market.

The evolution of C#: Lead designer describes modernization journey, breaks it down about getting func-y

Torben Mogensen

Functional C# == F#

If you like the .NET platform and C#, but want something more functional, you could try F#. F# is sort of a merge of OCaml and C#, having most of the features of both, but works best if you program mostly in a functional style. You can use all .NET libraries, and there are some F#-specific libraries that better support a functional style.

There are some places where having to support both functional and OO styles makes things a bit ugly (for example, two different syntaxes for exactly the same thing), but overall it is not a bad language. Not as elegant as Standard ML, though.

Torben Mogensen

"You can't take anything away"

While it is traditional to not remove features from a language to ensure full backwards compatibility, there are ways around it:

- You can make a tool that transforms programs using the deleted feature into programs that don't. This can require a bit of manual work, but not too much. Of course, fully automatic is best.

- You can remove the feature from all future compilers, but keep supporting the last compiler that has the feature (without adding new features to this).

- Warn that the feature will be removed in X years, and then remove it, in the meantime letting compilers warn that it will disappear. People will then have the choice between modifying their programs or using old, unsupported compilers once the feature is gone.

- You can introduce a completely new language that is only library-compatible with the old, let the old stay unchanged forever, and suggest people move to the new language. This is sort of what Apple did with Swift to replace Objective-C.

Third event in 3 months, Apple. There better be some Arm-powered Macs this time

Torben Mogensen

Emulation

It used to be that emulation caused a ×10 slowdown or thereabouts because the emulator had to decode every instruction before executing it. These days, emulation is more like JVM systems: You compile code on the fly into a code cache, optimising the compiled code if it is executed a lot (and this optimisation can be done on separate cores). This can keep the slowdown down to around 20% on average and almost nothing on programs with very small intensive compute kernels. On top of this, calls to the OS run natively, so you are unlikely to feel a significant slowdown. The cost is more memory use (for the code cache) and more heat generation (as you use otherwise idle cores for compilation and optimisation of code).

I can even imagine that Apple has tweaked the x86 code generation in their compilers to avoid code that is difficult to cross-compile to ARM, such as non-aligned memory accesses. This will only have marginal impact on x86 performance (it might actually improve it), but it can have a significant impact on the performance of the generated ARM code.

Amazon blasts past estimates, triples profits to $6.2bn but says COVID will cost it $4bn over the next quarter

Torben Mogensen

COVID?

Only the headline mentions COVID, the article itself just says "employee safety". That can, of course, include COVID measures, but it probably includes all sorts of other measures.

And COVID is likely to make more people do their Black Friday and Christmas shopping online, so it will probably gain Amazon more than the $4bn that they claim to spend on employee safety, including COVID measures.

As a "Carthago delenda est" line, I will add that I think Amazon should be forced to split into at least two independent companies: One for online sale and one for streaming videos. It would not be difficult to argue that Amazon uses its dominant position in online shopping to do unfair competition towards Netflix and other streaming services, by combining a free shipping membership with their streaming services.

What will you do with your Raspberry Pi 4 this week? RISC it for a biscuit perhaps?

Torben Mogensen

Dead end?

Much as I like RISC OS (I had an Archimedes and an Acorn A5000, and used RISC OS on an RPC emulator for a while), I think it is painting itself into a corner from which it cannot escape. It is still mainly written in 32-bit ARM assembly code, and the world is moving to 64 bits -- it is even becoming common for ARM processors to be 64-bit only. And it can only use one core, where even tiny systems these days are multicore. Cooperative multitasking is also rather dated. There were good reasons for these design decisions in the late 1980s, but they do not fit modern computing. MacOS had similar issues in the 1980s, but when it moved to a platform based on BSD (from Steve Jobs' NeXT project), most of these problems were solved. Acorn died before it could make similar changes to its own platform, and attempts at moving RISC OS to a more modern kernel have been half-hearted -- there was a RISC OS-style desktop called ROX for Linux, but it mainly copied the look and feel of RISC OS and didn't go very deep. And nothing seems to have happened with it for a long time.

So, I can't really see RISC OS moving out of a hobbyist niche into anything approaching the mainstream. Not without a rewrite so complete that it is debatable whether you could call the result RISC OS anymore. It might be better to port some of the interesting parts of RISC OS (some of the apps, the app-as-a-folder idea, and the font manager) to Linux and let the rest die.

Heads up: From 2022, all new top-end Arm Cortex-A CPU cores for phones, slabtops will be 64-bit-only, snub 32-bit

Torben Mogensen

Makes sense

I loved the old 32-bit instruction set when I had my Archimedes and A5000 home computers, but over time the instruction set accumulated so much baggage that it became a mess. So I'm fine with a 64-bit-only ARM. Nearly all modern applications are 64-bit, so support for the 32-bit ISA is just extra silicon area that could be better used for something else.

Sure, it is a drag that future Raspberry Pis will not be able to run RISC OS, as this is (still) mainly 32-bit assembly code. But RISC OS, for all its qualities, will not amount to anything more than a hobby system until it is ported to a high-level language (such as Rust) and made more secure. Even as a past user of RISC OS, it was not the OS core that I loved -- it was the higher-level details such as the GUI, the built-in apps, the font manager, the file system (with file types and applications as folders), and the easy-to-use graphics system. These could well be ported to a more modern OS kernel.

Ah yes, Sony, that major player in the smartphone space, has a new flagship inbound: The Xperia 5 II

Torben Mogensen

Lenses

I seriously doubt the tiny lenses used in smartphones are precise enough for true 8K video. So you could probably do just as well with intelligent upscaling of 4K or lower.

0ops. 1,OOO-plus parking fine refunds ordered after drivers typed 'O' instead of '0'

Torben Mogensen

ABC80

In the 1980s there was a Swedish-made home computer called the ABC80. On this computer, the pixel patterns for O and 0 were EXACTLY the same. Since O and 0 are close on a keyboard, this could give hard-to-find errors when programming in BASIC: is this a variable called "O" or the number 0? It didn't help that the designers had the bright idea that, to distinguish integer constants from floating-point constants, you added a "%" at the end of integer constants (similar to how integer variables were suffixed in most BASICs at the time). So O% and 0% were both valid. Variable names could only be a single letter or a single letter followed by a single digit (and suffixed with % or $ to indicate integer or string variables). All in all, this was not hugely user friendly. The follow-up ABC800 added a dot in the centre of the zero, but the BASIC was otherwise the same.

I was the happy owner of a BBC Micro, but I was briefly hired by a company to port some school software to ABC80. The way it operated on strings used huge amounts of memory, so I had to add a small machine-code routine to make in-place updates (insert char, delete char, replace char) in strings to keep it from running out of memory.

Nvidia to acquire Arm for $40bn, promises to keep its licensing business alive

Torben Mogensen

Independence

The main reason ARM was spun off from Acorn Computers to become an independent company was that Apple (who wanted to use ARM in their Newton hand-held) did not want to be dependent on a competitor (however tiny). Having NVIDIA control ARM can lead to similar sentiments from ARM licensees that compete with NVIDIA.

I would prefer ARM to stay neutral, with no single entity (company or person) owning more than 20% of the company.

'A guy in a jetpack' seen flying at 3,000ft within few hundred yards of passenger jet landing at LA airport

Torben Mogensen

I wonder why commercial airplanes don't have cameras recording everything visible through the cockpit windows. That way, you could review all sightings after the plane lands. Such cameras would not need extensive testing, as they do not affect the flight (except by drawing a small amount of power, and even that could come from batteries).

Toshiba formally and finally exits laptop business

Torben Mogensen

Libretto

I remember Toshiba best for their ultra-tiny laptops -- the Libretto range. I had a Libretto 50CT -- 210×115×34 mm with a screen smaller than many modern smartphones. A curiosity was that it had a mouse "nub" beside the screen and mouse buttons on the back of the lid, so you would use your thumb to move the cursor and your index and middle fingers to operate the buttons. It was great for taking along when travelling.

See more at https://www.youtube.com/watch?v=7HQt6EwA0JE

A tale of mainframes and students being too clever by far

Torben Mogensen

Ray-tracing on a Vax

When I was doing my MSc thesis about ray-tracing in the mid-1980s, we didn't have very good colour screens or printers, so to get decent images you had to use dithering, where different pixels are rounded differently to the available colours to create dot patterns that average to the true colour. One such technique is called error distribution: when you round a pixel, you divide the rounding error by 4 and add it to the next pixel on the same row and the three adjacent pixels in the next row. This way, the colours in an area average to the true colour.
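
In code, the error-distribution step looks roughly like this (a simplified sketch for a single grey-scale channel rounded to black/white; the thesis code worked on colour values, of course):

    #include <stdio.h>

    #define W 8
    #define H 8

    /* Round each pixel to 0 or 255 and push a quarter of the rounding error
       to the right neighbour and the three nearest pixels in the row below. */
    static void dither(float img[H][W])      /* values in 0..255 */
    {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                float old = img[y][x];
                float rounded = old < 128.0f ? 0.0f : 255.0f;
                float err = (old - rounded) / 4.0f;
                img[y][x] = rounded;
                if (x + 1 < W) img[y][x + 1] += err;
                if (y + 1 < H) {
                    if (x > 0) img[y + 1][x - 1] += err;
                    img[y + 1][x] += err;
                    if (x + 1 < W) img[y + 1][x + 1] += err;
                }
            }
    }

    int main(void)
    {
        float img[H][W];
        for (int y = 0; y < H; y++)          /* a uniform mid-grey test image */
            for (int x = 0; x < W; x++)
                img[y][x] = 128.0f;
        dither(img);
        for (int y = 0; y < H; y++) {        /* print the resulting dot pattern */
            for (int x = 0; x < W; x++)
                putchar(img[y][x] > 0.0f ? '#' : '.');
            putchar('\n');
        }
        return 0;
    }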

I ran the ray-tracer program (written in C) on the department Vax computer, but I had an annoying problem: at some seemingly random place in the image, a "shadow" would appear, making all the pixels below and to the right of it really odd colours. I looked at my code and could see nothing wrong, so I ran the program again (this would take half an hour, so you didn't just run it again without looking carefully at the code). The problem reappeared, but at a different point in the image! Since I didn't use any randomness, I suspected a hardware fault, but I needed more evidence to convince other people of this. I used a debugger and found that, occasionally, multiplying two FP numbers would give the wrong result. The cause of the shadow was that one colour value was ridiculously high, so even after distributing the error to neighbouring pixels, these would also be ridiculously high, and so on.

To make a convincing case, I wrote a short C program that looped the following:

1. Create two pseudo-random numbers A and B.

2. C = A*B; D=A*B;

3. if (C != D) print A, B, C, and D and stop program.
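
Reconstructed as a runnable sketch (the rand()-based number source and the output format are my guesses; note that a modern optimising compiler would fold the two multiplications into one, so this only makes sense compiled without optimisation):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        for (;;) {
            /* 1. two pseudo-random numbers */
            double a = (double)rand() / RAND_MAX;
            double b = (double)rand() / RAND_MAX;
            /* 2. the same multiplication, performed twice */
            double c = a * b;
            double d = a * b;
            /* 3. they should never differ on working hardware */
            if (c != d) {
                printf("a=%.17g b=%.17g c=%.17g d=%.17g\n", a, b, c, d);
                return 1;
            }
        }
    }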

This program would, on average, stop and print out different values for C and D after one or two minutes of running (but never with the same two numbers), and this convinced the operators that there was something wrong, and they contacted DEC. They sent out some engineers, and they found out that there was a timing problem where the CPU sometimes would fetch the result of the multiplication from the FPU slightly before it was ready, so by increasing the delay slightly, they got rid of the problem.

Apple to keep Intel at Arm's length: macOS shifts from x86 to homegrown common CPU arch, will run iOS apps

Torben Mogensen

Acorn welcoming Apple to the RISC club

When the PPC-based Macs first came out in 1994, Apple claimed to be the first to use a RISC processor in a personal computer. Acorn, which had done so since 1987, ran an ad welcoming Apple to the club (while pushing their RISC PCs).

If Acorn still existed, they could run a similar ad welcoming Apple into the ARM-based PC club.

Torben Mogensen

Re: RIP Hackintosh

"I do feel like Apple are consistently missing a trick here - not everyone can afford to buy their expensive hardware"

Apple is primarily a hardware company, and they actively oppose running their software on other hardware, as the software is mainly there to sell the hardware. Same reason they don't license iOS to other phone makers.

Torben Mogensen

"The only ones with a problem are people buying a Mac exclusively to run Windows."

That used to be a thing, when Macs were known for superior hardware and design, but these days you can get Wintel laptops with similar design and build quality for less than equivalent Macs. So while a few people may still do it, it is not as widespread as it was 15 years ago. It is certainly not enough to make a dent in Apple's earnings if they switch to other brands.

Besides, if you have already bought a Mac to run Windows, you should not have any problems: Windows will continue to run just fine. Just don't buy one of the new ARM-based Macs to run Windows, buy Asus, Lenovo, or some of the other better PC brands.

Torben Mogensen

Re: Keyword here is "maintained"

"Compare Apple's approach to Windows and the differences are clear: Windows runs code from 30 years ago, and likely will continue to run it unchanged."

30-year-old (or even 10-year-old) software should run fast enough even with a naive cross-compilation to modern ARM CPUs. I occasionally run really old PC games on my ARM-based phone using DOSBox, and that AFAIK uses emulation rather than JIT compilation.

And running really old Windows programs (XP or earlier) is not really that easy on Windows 10.

Torben Mogensen

Re: Rosetta

One of the things that made Rosetta work (and will do so for Rosetta 2) is that OS calls were translated to call natively compiled OS functions rather than emulating OS code using the old instruction set. These days, many apps use dynamically loaded shared libraries, and these can be precompiled, so apps spend most of their time in precompiled code even if the apps themselves are x86-only. Also, with multicore CPUs, JIT translation can be done on some cores while other cores execute already-compiled code.

But the main advantage is that nearly all software is written in high-level languages, so it can essentially be ported with just a recompilation. The main thing that hinders this is that programs in low-level languages like C (and Objective-C) may make assumptions about memory alignment and layout that are not preserved when recompiling for another platform. Swift is probably less problematic in this respect. And these days you don't need to buy a new CD to get a recompiled program from the vendor -- online updates can do it for you. So moving to an ARM-based Mac is much less of a hassle for the user than the move from PPC to x86 (and the earlier move from 68K to PPC).

Moore's Law is deader than corduroy bell bottoms. But with a bit of smart coding it's not the end of the road

Torben Mogensen

Re: Dennard scaling

Video compression is not really highly serial. The cosine transforms (or similar) used in video compression are easily parallelised. It is true that there are problems that are inherently sequential, but not as many as people normally think, and many of those that are are not very compute-intensive. It is, however, true that not all parallel algorithms are suited to vector parallelism, so we should supplement graphics processors (SIMD parallelism) with multi-cores (MIMD parallelism), but even here we can gain a lot of parallelism by using many simple cores instead of a few complex cores.

But, in the end, we will have to accept that there are some problems that just take a very long time to solve, no matter the progress in computer technology.

Torben Mogensen

Dennard scaling

The main limiter on performance these days is not the number of transistors per cm², it is the amount of power drawn per cm². Dennard scaling (formulated in the 1970s, IIRC) stated that this would remain roughly constant as transistors shrink, so you could get more and more active transistors operating at a higher clock rate for the same power budget. This stopped a bit more than a decade ago: transistors now use approximately the same power as they shrink, so with the same number of transistors in a smaller area you get higher temperatures, which requires fancier cooling, which requires more power. This is the main reason CPU manufacturers stopped doubling the clock rate every two years (it has been pretty much constant at around 3GHz for laptop CPUs for the last decade). To get more compute power, the strategy is instead to have multiple cores rather than faster single cores, and now the trend is to move compute-intensive tasks to graphics processors, which are essentially a huge number of very simple cores (each using several orders of magnitude fewer transistors than a CPU core).

So, if you want to do something that will benefit a large number of programs, you should exploit parallelism better, in particular the kind of parallelism you get on graphics processors (vector parallelism). Traditional programming languages (C, Java, Python, etc.) do not support this well (and even Fortran, which does to some extent, requires very careful programming), so the current approach is to use libraries of carefully written OpenCL or CUDA code and call these from, say, Python, so few programmers ever have to worry about parallelism. This works well as long as people stick to linear algebra (such as matrix multiplication) and a few other standard algorithms, but it does not work if you need new algorithms -- few programmers are trained to use OpenCL or CUDA, and using these effectively is very, very difficult. And expecting compilers for C, Java, Python etc. to automatically parallelize code is naive, so we need languages that are designed for parallelism from the start and do not add features unless the compiler knows how to generate parallel code for them. Such languages will require more training to use than Python or C, but far less than OpenCL or CUDA. Not all code will be written in these languages, but the compute-intensive parts will, while things such as business logic and GUI code will be written in more traditional languages. See Futhark.org for an example of such a language.

In the longer term, we need to look at low-power hardware design, maybe even going to reversible logic (which, unlike irreversible logic, has no theoretical lower bound on power use per logic operation).

ALGOL 60 at 60: The greatest computer language you've never used and grandaddy of the programming family tree

Torben Mogensen

The influence of ALGOL 60

You can see the influence of ALGOL 60 on modern languages most clearly if you compare programs written in 1960 versions of ALGOL, FORTRAN, COBOL, and LISP (which were the most widespread languages at the time). The ALGOL 60 program will (for the most part) be readily readable by someone who has learned C, Java, or C# and nothing else. Understanding the FORTRAN, COBOL, or (in particular) LISP programs would require a good deal of explanation, but understanding the ALGOL 60 program would mainly be realising that begin, end, and := correspond to curly braces and = in C, Java, and C#. Look, for example, at the Absmax procedure at https://en.wikipedia.org/wiki/ALGOL_60#Examples_and_portability_issues

FORTRAN and COBOL continued to evolve into something completely unlike their 1960 forms while still retaining their names -- even up to today. ALGOL mainly evolved into languages with different names, such as Pascal, C, CPL, Simula and many others. So ALGOL is not really more of a dead language than FORTRAN II or COBOL 60. A computer scientist in the late 60s was asked "What will programming languages look like in 2000?". He answered "I don't know, but I'm pretty sure one of them will be called FORTRAN". This was a pretty good prediction, as Fortran (the only name change being the dropping of the all-caps spelling) still exists, but looks nothing like FORTRAN 66, which was the dominant version at the time. You can argue that the modern versions of Fortran and COBOL owe more to ALGOL 60 than they do to the 1960 versions of themselves.


