* Posts by Torben Mogensen

540 publicly visible posts • joined 21 Sep 2006

Page:

RISC OS Open plots great escape from 32-bit purgatory

Torben Mogensen

Small suggestions for improvements

There are a number of fairly simple things that can be done to improve the file system:

- Increase the number of file types to a large number of bits, so choosing a random number as a file type is unlikely to clash with existing file types.

- Make some of the file types folder types. Different folder types would have different actions for opening them, so application folders would no longer need to start with !, and Impression documents and the like would simply use folder types that indicate which application should open/show/edit them. You could also have sandboxed folders, where applications and commands started from within them have no access, or read-only access, to files outside; browsers could be made to run from such a folder. Some folder types could be encrypted, so you would need a key to access their contents. (A rough sketch of what such a folder-type record might hold follows below.)
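Purely as an illustration of the idea (nothing below is an existing RISC OS structure; the field names and types are invented), a folder-type record registered with the system could carry something like this:

```c
#include <stdint.h>

/* Hypothetical sketch only: a 64-bit type tag makes randomly chosen IDs
   unlikely to collide, and each folder type carries its own open action
   and sandbox policy. */
typedef uint64_t typetag_t;

typedef enum {
    ACCESS_FULL,            /* code run from inside may touch anything */
    ACCESS_READ_ONLY_OUT,   /* read-only access to files outside the folder */
    ACCESS_SANDBOXED        /* no access outside the folder at all */
} outside_access_t;

typedef struct {
    typetag_t        type;       /* randomly chosen 64-bit folder type */
    const char      *open_with;  /* application used to open/show/edit it */
    outside_access_t outside;    /* policy for code started inside the folder */
    int              encrypted;  /* non-zero: a key is needed for the contents */
} folder_type;
```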

Torben Mogensen

Re: Security?

There are several things that could be done:

- Limit what applications can do before their folder is actually clicked, i.e., when it is merely displayed in a directory viewer. The purpose of that early code is mainly to set file-type actions and icons, so maybe limit it to that?

- Make a sandbox folder type that allows programs running from inside it to access only files in the sandbox, perhaps implemented as a virtual file system. There could be two variants: one where all access is local to the sandbox, and one where writes are confined to the sandbox but reads are allowed from anywhere. Browsers would typically run from such a folder. Users would remain free to move files into and out of these folders.

- Require modules to be digitally signed.

- Require administrator rights for certain actions. Could be done with a sudo-like command that requires a password.

Torben Mogensen

Security?

RISC OS was designed to be stored in ROM, so there was little concern about viruses and other attacks -- you could just restart the machine to get rid of bad stuff. That held at least until hard disks became common, and even then the problem was limited. It also trusted applications not to be malicious: as soon as an application folder becomes visible in a directory viewer, the OS executes its initialisation code to set icons and the like. But that initialisation code can execute any shell command, which is a potential hazard.

But times are different now, so it might be time to consider improving the robustness of RISC OS from malicious attacks.

Torben Mogensen

The 26-bit version of ARM used the two least significant bits of R15 (the PC register) to store the processor mode: User, FIQ, IRQ, and Supervisor. The upper four bits held the condition flags (N, Z, C, and V), the next two bits down were the interrupt-request disable and fast-interrupt-request disable flags, and the remaining 24 bits held the word-aligned program counter, giving the 26-bit address space.
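A rough sketch of that combined PC/PSR layout in C -- the macro names are mine and the exact bit positions are from memory, so treat them as an assumption rather than a reference:

```c
#include <stdint.h>
#include <stdio.h>

/* 26-bit ARM kept the program counter and the status bits together in R15. */
#define MODE_MASK  0x00000003u  /* bits 0-1: 00 USR, 01 FIQ, 10 IRQ, 11 SVC */
#define PC_MASK    0x03FFFFFCu  /* bits 2-25: word-aligned program counter */
#define F_BIT      0x04000000u  /* bit 26: FIQ disable */
#define I_BIT      0x08000000u  /* bit 27: IRQ disable */
#define V_FLAG     0x10000000u  /* bits 28-31: condition flags V, C, Z, N */
#define C_FLAG     0x20000000u
#define Z_FLAG     0x40000000u
#define N_FLAG     0x80000000u

int main(void) {
    uint32_t r15 = 0x8000ABCCu | 0x03u;   /* example: N set, SVC mode */
    printf("pc=0x%06X mode=%u N=%d\n",
           (unsigned)(r15 & PC_MASK), (unsigned)(r15 & MODE_MASK),
           (r15 & N_FLAG) != 0);
    return 0;
}
```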

The 32-bit version moved these flags into a separate processor status register (the CPSR) which, unlike the PC, is not one of the numbered registers. The load/store-multiple instructions were given an extra bit to load or store the PSR along with the PC (if the PC is in the register list; otherwise the bit specifies that the registers loaded or stored are the user-mode registers).

EU OS drafts a locked-down Linux blueprint for Eurocrats

Torben Mogensen

Monocultures are vulnerable

While I applaud the effort to move away from the Microsoft monoculture, I don't think the solution is to move to another monoculture, even if this is not based in the US. Monocultures are vulnerable in the sense that if you find a weakness, it applies to all individuals, just like a biological monoculture is vulnerable to disease. Variety gives robustness and a better chance of herd immunity.

So I suggest that a solution is not based on a single CPU, OS, desktop or application suite, but a variety of these. Document formats and communication protocols would need to be standardised to allow the different machines to cooperate, but we already see a lot of that with machines running Windows, Linux, MacOS, Android, iOS, FreeBSD, etc. working together through open protocols and document formats. And we have dozens of browsers that use the same Internet standards.

Sure, the protocols and document formats may be vulnerable too, but if document formats can not contain executable code that can affect the file system, these are reasonably safe (I remember a time when just opening a Word document could install a virus). And protocols are some of the things that have been studied intensely to uncover weaknesses or even formally prove that they do not exist.

Time to make C the COBOL of this century

Torben Mogensen

Problems with C

As mentioned earlier, C was designed for writing systems code on the PDP-11 and related computers. That is all very well, and C is not a bad choice for writing device drivers. The problem is that it is used for so much else, probably because cheap or free C compilers became available before the equivalent for other languages did. Nowadays, compilers are pretty much universally free.

Some people have mentioned that a failing of C is its limited support for data-parallel programming (because that wasn't around on the PDP-11 and VAX), but IMO that is not the worst problem it inherited from the 1970s machine model. The worst problem is that C is designed around a single, flat memory space accessed through pointers that can be converted to and from integers. This is where most of the security issues originate: you can easily address outside the range of arrays and other objects, because addresses are not checked to be within the range of those objects. Often it is not even possible to check, as no information about an object's size is available at compile time or run time.

For example, to find the size of a string, you look for a zero byte. But that byte is easily overwritten with another value, after which the string looks much larger than it is, and trying to find its size can mean accessing addresses that are not mapped to real memory, giving access violation errors. Arrays are no better: their sizes are not stored anywhere (by default), so adding index checks is pretty much impossible -- at best, you can check that the address points to valid memory.

Sure, it is possible to define a "fat pointer" type that, in addition to the actual pointer value, also contains the first and last valid address (or equivalent) of the object. But then every operation on such pointers must go through library functions, which is cumbersome and still not checked -- nothing prevents you from messing with the fields of fat pointers.

Manual memory management is also unsafe: even with fat pointers, you can access objects that have been freed and whose memory is now used for something else, and if you conservatively avoid freeing objects to prevent this, you are likely to get space leaks. Also, because pointers can be converted to integers and back, a memory manager cannot move heap-allocated objects to close gaps, so you get fragmentation. Adding a conservative garbage collector can prevent most cases of premature freeing and some space leaks, but it doesn't prevent fragmentation.
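To make the string problem above concrete, here is a deliberately broken fragment; nothing in the language stops it from compiling, and what it prints (if it doesn't crash) depends on whatever happens to lie beyond the array:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char s[4] = { 'a', 'b', 'c', 'd' };   /* no terminating '\0' */
    /* strlen scans past s[3] until it happens to hit a zero byte somewhere
       in adjacent memory -- undefined behaviour, so the reported "length"
       is meaningless and the scan may run into unmapped memory. */
    printf("%zu\n", strlen(s));
    return 0;
}
```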

It IS possible to code safely in C, by following strict coding practices, but since these practices can not be enforced or checked, this is a weak promise.

IMO, pointers should be abstract objects that cannot be converted to integers (or anything else, for that matter) or vice versa. You can create a pointer by allocating an object, you can follow a pointer with an offset that is verified (at compile time or run time) to be less than the size of the object, and you can split an object into two. Joining two objects would require that they are adjacent, which is a property that should not be observable, so you shouldn't be able to do that. You can even free the object when the pointer variable goes out of scope (otherwise you would have a dangling pointer), but only if there are no remaining copies of the pointer. Yes, some of that sounds a lot like the Rust rules.

But in many cases you don't even need all of these capabilities -- you might not need to be able to split or explicitly free objects. This is the case in most languages with automatic memory management.

Some object to these restrictions because they are restrictions. And some because they impact performance. But the performance impact is usually minimal, and because you expose less information about objects to the programmer, the compiler or memory manager may be able to perform optimisations that they can not do if, say, pointers can be converted to integers and vice-versa. And a programmer should be able to work with restrictions in the same way that electricians have to follow safety standards for electrical installations.

Torben Mogensen

Replacement for COBOL

For finance, COBOL has the advantage of using fixed-point arithmetic with user-specified numbers of digits before and after the point. In some places, there are laws that specify how to round amounts to a specified precision, and COBOL often supports these requirements.

But other languages can encode fixed-point arithmetic in libraries, and when writing to databases you often convert integers to strings anyway, so you can get around this in many languages.
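A minimal sketch of the scaled-integer approach in C -- the "hundredths" representation and the half-away-from-zero rounding rule are choices made for the example, not a claim about any particular law or about COBOL's own ROUNDED clause:

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Amounts are held as 64-bit counts of hundredths, so 123.45 is 12345. */
typedef int64_t cents_t;

/* Multiply an amount by num/den, rounding half away from zero.
   (Assumes the intermediate product does not overflow.) */
static cents_t mul_rounded(cents_t amount, int64_t num, int64_t den) {
    int64_t p = amount * num;
    return p >= 0 ? (p + den / 2) / den : -((-p + den / 2) / den);
}

int main(void) {
    cents_t price = 12345;                        /* 123.45 */
    cents_t gross = mul_rounded(price, 125, 100); /* add 25% VAT -> 154.31 */
    printf("%" PRId64 ".%02" PRId64 "\n", gross / 100, gross % 100);
    return 0;
}
```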

Some financial institutions have moved from COBOL to OCaml or F#, as the functional programming style fits the domain well. Some even use APL. So it's not as if there are no alternatives. But many financial institutions use mainframes for high transaction throughput, and not many languages are supported on mainframes. COBOL is; on IBM mainframes so are PL/I (of course), Java, and C/C++. But the choice is usually quite limited.

Some people have claimed that COBOL is readable by non-programmers. This is true only to a limited extent. The original COBOL was mostly readable until it came to control-flow primitives, and there are cases where the COBOL meaning of words is subtly different from their English meaning, which can cause confusion. And there is a long step from reading code to writing code.

The latest language in the GNU Compiler Collection: Algol-68

Torben Mogensen

Re: Lead to a bunch of stuff at what was RSRE Malvern

"Wirth went on to knock Pascal together as a rush job, intentionally breaking the declaration syntax to make it incompatible with ALGOL."

The change was not made merely to be incompatible; it adopted the variable : type notation used in type theory. In ALGOL-style notation (later adopted by C), you write type variable, but if the variable is an array, you write the bounds after the variable name, e.g. integer x[100]. That means part of the type of x comes before the name and another part after it. In Pascal, you write x : array[0..99] of integer, keeping all of the type to the right of the colon.

Admittedly, Wirth's notation is rather verbose (Pascal generally is). You could shorten it to x : integer[0..99] while keeping all of the type to the right of the colon.
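The difference is easy to see in C, which kept the Algol-style order (the Pascal forms are shown in the comment for comparison):

```c
/* In C, the element type comes before the identifier and the array bound
   after it, so the type of x is split around the name. */
int x[100];

/* Pascal keeps the whole type to the right of the colon:
       x : array[0..99] of integer;
   or, in the shortened form suggested above:
       x : integer[0..99];
*/
```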

How a good business deal made us underestimate BASIC

Torben Mogensen

My first BASIC

was called "RC BASIC", where RC was short for Regnecentralen, a Danish computer company. RC BASIC ran on my high school's RC7000computer, which was a rebadged Data General Nova (with ferrite memory, no screen, so all interaction was through a paper teletype terminal). RC BASIC was actually just Regnecentralen's version of COMAL, a structured BASIC supporting while and repeat loops and named procedures/functions with parameters and local variables much like the later BBC BASIC. Line numbers were optional in COMAL.

The next BASIC I learned was on a Commodore PET that one of my friends bought. This was vastly inferior: not only did it lack structured statements, variable names were also limited to two significant characters. But it had a screen and limited block graphics, so it was fun to play with. I did some professional BASIC programming, first for a CP/M machine and later for the Swedish ABC80 home computer. Their BASIC versions were not significantly better than Commodore BASIC, though.

The last BASIC I used in a significant way was BBC BASIC, first on a BBC Computer that I bought shortly after its release (a group of friends imported a number from England, as there was no Danish retailer). My friend with the PET sold it and bought a BBC after he saw how superior the BBC was. Next, I bought an Archimedes, which also used BBC BASIC. Since then, I haven't used BASIC much in any version.

Torben Mogensen

C16 and Plus4

Commodore tried to replace the VIC-20 and C64 with the C16 and Plus4 computers. These had much better BASIC and graphics, but they couldn't run C64 games, so they never became hugely successful. The later C128 added compatibility with C64, and was somewhat more successful, but it was essentially too little too late.

I actually won a C16 at a computer fair when it was first released, so I played a bit with it. But since I already had a BBC Computer, I sold it off fairly soon. I did like the larger colour palette of the C16, but apart from that the BBC was far better.

Torben Mogensen

Re: Anyone who has a blanket rule banning GO TOs...

"Also since there was no defined block structure, one subroutine could GOTO somewhere inside another and borrow its RETURN, so it was quite possible for one exit point to serve many subroutines."

Another common practice was source-code level tail-call optimisation: The sequence GOSUB N: RETURN was replaced by GOTO N. Sure, it saves stack space and time, but it makes the code harder to read.

Torben Mogensen

Re: Anyone who has a blanket rule banning GO TOs...

"A GOTO with a hard-coded number is almost structured programming. :-) It's GOTO N where N is a variable that's the killer."

FORTRAN's computed GOTO required the statement to list all the possible target labels explicitly, which made control-flow analysis easier.

Torben Mogensen

Re: Anyone who has a blanket rule banning GO TOs...

"It's a cute meme, but the headline applied to Dijkstra's letter misses the underlying problem: it's not the GOTO, it's the Where From.

When using GOTO, the destination is unlinked. You look at a line of code (BASIC was a line-oriented language), and you have no idea how you got there."

I have designed and implemented several (low-level) languages using GOTO-like jumps to labels, but the rule is that every label must occur exactly twice: Once in a jump, and once in an entry point to a statement. If you want several places to jump to the same statement, you must specify multiple labels at that statement. This makes it easy to find where jumps to a statement come from. This also makes some transformations (such as program inversion and specialisation) simpler.

With Gelsinger gone, to fab or not to fab is the $7B question

Torben Mogensen

The US needs a local foundry

I can't imagine the US military wanting chips that are manufactured in Asia, so for national-security reasons the US will want the Intel foundry to survive in US hands. The CHIPS Act mentioned in the article is a step towards this, but it may not be sufficient to ensure survival. So the state may force military contractors to use US-based foundries, with Intel's as a priority (as it has the most advanced technology). It will not be easy, but I can't imagine the USA allowing the Intel foundry to die, even if it means pouring massive amounts of cash into it.

Abstract, theoretical computing qualifications are turning teens off

Torben Mogensen

Computational thinking

I think the most important IT-related thing to teach school kids is computational thinking: thinking about how to solve problems systematically. First understand the problem (which in many cases can be achieved by playing around with it), then decide how you will know when you have solved it, then break the solution process into smaller steps, then implement these steps, and finally check whether your proposed solution lives up to the criteria you decided on before you started. Repeat as necessary. This is not far from Polya's "How to Solve It" method, though that is mostly targeted at maths.

Note that you don't need a computer to do or teach computational thinking -- you can perform the solution by hand and describe it in text or using diagrams (such as flow charts). For example, you can give the kids a shelf of books and ask them to sort the books alphabetically by author given certain constraints such as not taking out more than two books at a time. Another problem is the classic Tower of Hanoi puzzle, which you can use stacking cups (found in every toy store) for.

Only when the kids are familiar with solving such problems by manipulating physical objects by hand do you point them to a computer and ask them to implement the methods using abstract values such as numbers or strings.
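As an illustration of that final step (my own sketch, not part of any curriculum), the Tower of Hanoi method the kids discover by hand ends up looking like this once written down for a computer:

```c
#include <stdio.h>

/* Move n discs from peg 'from' to peg 'to', using 'via' as the spare peg:
   move the n-1 smaller discs out of the way, move the largest disc,
   then move the n-1 smaller discs on top of it. */
static void hanoi(int n, char from, char to, char via) {
    if (n == 0) return;
    hanoi(n - 1, from, via, to);
    printf("move disc %d from %c to %c\n", n, from, to);
    hanoi(n - 1, via, to, from);
}

int main(void) {
    hanoi(3, 'A', 'C', 'B');   /* 2^3 - 1 = 7 moves */
    return 0;
}
```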

Torben Mogensen

Re: WYF!

"Programming was BASIC cos that's all the BBC.micro had"

Not entirely accurate. It was all that the BBC came with when you bought it, but you could get Pascal, LISP, COMAL, and several other languages on ROM or loaded from disc.

But BBC BASIC was not the worst language to use. It had features for structured programming (while loops, procedures, functions, etc.), which were not found in, for example, C64 BASIC or Sinclair BASIC.

BASIC co-creator Thomas Kurtz hits END at 96

Torben Mogensen

Re: The bit of the brochure which gives me the shivers:

"My COSMIC ELF(?) "

That would be a COSMAC ELF: https://en.wikipedia.org/wiki/COSMAC_ELF

Torben Mogensen

Re: The bit of the brochure which gives me the shivers:

"Self-modifying code *can* be very space-efficient, with small op-code sets like this, I suppose, but all I can say is that I tried it once (6502 assembler) and it almost scrambled my brain."

Some early processors only had constant addresses for jumps, so to return from a procedure call, the caller would modify a jump instruction at the end of the procedure to jump to the correct place.

I, also, tried self-modifying code on a 6502 when making a sprite routine for my BBC Micro. To allow the sprite to overwrite, XOR, OR, or AND with the screen contents, I modified an instruction in the code for this operation. The alternative would have been either several almost-identical copies of the code or slower operation, but both memory and clock cycles were limited.

Torben Mogensen

My first programming language

was not BASIC, but close. It was (in 1976) COMAL, a BASIC-inspired language that had structured control statements and procedures/functions with parameters -- similar to the later BBC BASIC. COMAL ran on my high school's RC7000 computer (a rebadged Data General Nova), with ferrite memory and no screen -- we used a paper teletype terminal and punched tape.

I have programmed in many different BASIC variants over the years: Commodore BASIC (first on the PET 2001 and later on VIC 20 and C64), Sinclair BASIC, BBC BASIC, ABC-80 BASIC, BASIC for TI-81, and a few more. Of these, BBC BASIC was my clear favourite.

Without BASIC, the home computer revolution would have been very different. Few other languages could be implemented in a few KB of ROM. Forth was an alternative, used on the Jupiter Ace, but it didn't get a huge following, since Forth was too difficult for beginners. LISP might have been another alternative, but while a simple LISP interpreter can easily be implemented in 1 KB or less, programs tend to use more memory than BASIC. LOGO was available for some 8-bit computers, but requires graphics or an external "turtle" to be interesting, and it also uses more memory than BASIC. Of these alternatives, BASIC is by far the easiest to learn, and it was sufficient for home computer programs that don't need data structures other than arrays.

Rust haters, unite! Fil-C aims to Make C Great Again

Torben Mogensen

Algol

"Make Algol Great Again.

As if it ever was!"

Tony Hoare remarked about Algol 60: "Here is a language so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors".

So, yeah, it was great. Also, C borrows a lot from Algol, mainly just replacing begin and end by { and } and shortening a lot of operator names to one or two characters. Oh, and removing all run-time safety properties.

Torben Mogensen

What is needed to make C safe

C (and, by extension, C++) is unsafe in so many ways that making a compiler + runtime system for C that makes it safe is bound to make programs run slower. So while this may be a solution for compiling "dusty deck" programs without modification for applications where a ×2 to ×5 slowdown is not important, I can't see a way around replacing C by languages that are safe by design for applications where speed is important.

It has long been known how to make C safe(r):

- Add garbage collection. Replace free() by a no-operation and reclaim memory by GC. Because C can do all sorts of stuff with pointers, this requires conservative GC: any value that could be a pointer into the heap is treated as a pointer into the heap. So if an integer by chance happens to be in the range of values that (if it were cast to a pointer) would point into the heap, we must preserve the object it points to. But C pointers need not point to the header of a heap-allocated object: they can point anywhere from the start of an object to one element past its end (anything else is undefined behaviour). So to identify objects, we need to know where objects start. This can be done with a global table of start and end addresses of heap objects, which the GC compares a value against to find the header of the object; that gets expensive if there are many heap-allocated objects. Alternatively, every heap object starts with a 64-bit "magic word", a value that is unlikely to be produced by computation, and you search backwards in memory until you find a magic word, which gives you the header of the object. Not 100% safe, but it works most of the time. Or, alternatively, use fat pointers.

- Fat pointers are represented by two machine words: one that indicates the start of the object into which the pointer points, and one that is the actual value of the pointer. This makes it easy to find the headers of objects, and you can also do range checking (as the headers indicate the sizes of objects). It makes pointers bigger, and range checking costs, but it allows precise GC. Casting integers to pointers (and back) is a problem, though. As above, you can search for the header of the object to which the new pointer points (and report an error if it doesn't point to any), but this is costly and doesn't give strong guarantees. In addition to explicit casts, storing a non-pointer value into a union and taking it out as a pointer is problematic, so unions should be tagged with field indicators that are checked when you store and read values. And since any integer can be cast to a pointer, you can never be sure when a heap object is dead: it may be accessed later when an integer is cast to a pointer. Coding tricks such as using XOR to traverse lists bidirectionally will make this happen, so you cannot guarantee 100% memory safety. (A rough sketch of a fat-pointer representation follows below.)
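A minimal sketch of such a fat-pointer representation in C -- the names, the header layout, and the abort-on-error policy are all choices made for the example, not a reference to any existing implementation:

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct { size_t size; } objhdr;   /* header holds the payload size */

typedef struct {
    uint8_t *base;   /* start of the heap object (its header) */
    uint8_t *addr;   /* the pointer the program actually uses */
} fatptr;

static fatptr fat_alloc(size_t size) {
    uint8_t *mem = malloc(sizeof(objhdr) + size);
    if (!mem) abort();
    ((objhdr *)mem)->size = size;
    return (fatptr){ mem, mem + sizeof(objhdr) };
}

/* Checked read of the byte at offset ofs from the fat pointer. */
static uint8_t fat_read(fatptr p, size_t ofs) {
    objhdr *h = (objhdr *)p.base;
    size_t pos = (size_t)(p.addr - p.base) - sizeof(objhdr) + ofs;
    if (pos >= h->size) abort();          /* range check against the header */
    return p.addr[ofs];
}

int main(void) {
    fatptr a = fat_alloc(10);
    (void)fat_read(a, 9);    /* fine */
    (void)fat_read(a, 10);   /* out of range: aborts instead of reading junk */
    return 0;
}
```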

So, it is a better solution to design a language where you cannot cast integers (or any other value) to pointers, and where pointers always point to the headers of objects. This allows single-word pointers, and range checks can be made by reading size information from the object headers. You can no longer just increment a pointer in a loop to traverse an array (you have to use offsets from the base pointer), but that is a small cost -- base+offset addressing is usually supported in instruction sets, and the compiler may do strength reduction to transform pointer+offset into a direct pointer when it is safe to do so.

Unchecked unions should also be avoided, as should null pointers. You can use option types instead. Compilers can compile these into values where 0 means "none" and any non-zero value is a real pointer, so there is no run-time overhead (apart from checking if the value is 0, which is required to avoid following null pointers). Rust does this.

Implicit casts should also be avoided: an explicit cast need not have any runtime cost, and leaving casts implicit is mostly a matter of programmer laziness. Null-terminated strings are not exactly safe either.

Some will say that GC is costly. Well, malloc() and free() are not exactly free either, and they are prone to fragmentation which can not be avoided as long as you can cast integers to and from pointers, as this prevents compacting the heap to close gaps.

The US government wants developers to stop using C and C++

Torben Mogensen

Re: Don't forget about Coq

"Program verification with Coq should discover vulnerabilities even in C program."

Coq and similar verification systems can be used to prove absence of certain vulnerabilities, but it is not an entirely automatic process. In all but the simplest cases, it requires considerable manual work and even re-coding programs to be better behaved. And it often uses a lot of compute resources.

Part of the reason for this is that absence of vulnerabilities is an undecidable property if at least one program in the language in question can have vulnerabilities. For example, if any program can follow a pointer to freed memory, then deciding whether a given program can do this is undecidable. Sure, you can often show that a particular program does or does not have such vulnerabilities, but doing it in general for all programs is not possible. And as programs increase in size, the probability that a proof attempt fails increases.

The only reasonable way to guarantee against specific vulnerabilities such as read-after-free is to ensure that they can never happen in any program. This requires new languages and not attempts to prove absence of vulnerabilities in programs written in languages that allow such vulnerabilities.

The only other alternative is to reduce the computational power of programming languages to make the properties decidable for all programs, i.e., to use languages that are not Turing complete. And even then, decidability does not imply effective decidability: deciding a property may take an extremely long time, even if you are guaranteed to eventually get an answer.

Torben Mogensen

Temporary fix

For low-level system work, such as the Linux kernel, you need hard real-time constraints, so you can not use garbage collection where there can be unpredictable pauses of a few milliseconds. Yes, concurrent GC exists, and it can reduce most of these pauses, but it can not eliminate them entirely. This is where the Rust model or similar is required: Memory safety without unpredictable pauses.

But C and C++ are used for many applications that do not need hard real-time constraints. For these, it would be acceptable to compile C/C++ with index checks and the like to prevent buffer overruns, and with a conservative garbage collector that manages memory to avoid use-after-free and never-freed errors (which cause space leaks). This imposes some performance overhead, but not too much, and most index checks can be eliminated by fairly simple analyses.
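As a sketch of what that could look like in practice, here is the usual drop-in pattern with the Boehm-Demers-Weiser conservative collector (assuming the common libgc API and linking with -lgc); free() simply disappears:

```c
#include <gc.h>      /* Boehm-Demers-Weiser conservative GC */
#include <stdio.h>

int main(void) {
    GC_INIT();
    for (int i = 0; i < 100000; i++) {
        /* Allocate from the collected heap and never free explicitly;
           blocks that become unreachable are reclaimed by the collector. */
        int *p = GC_MALLOC(1000 * sizeof *p);
        p[0] = i;
    }
    printf("allocated ~%zu bytes without ever calling free\n",
           (size_t)100000 * 1000 * sizeof(int));
    return 0;
}
```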

But, let's face it, C and C++ were designed to be close to a machine model (single core, flat memory) that was outdated 20 years ago, so it is only a matter of time before they become irrelevant. Sure, a lot of people will continue using them, but new programmers will choose languages that are a better fit for the highly parallel compute models of today. And, sure, a lot of C/C++ code will stick around even longer, as the effort of rewriting it will be prohibitive. But they will no longer be used for tasks where performance is critical, nor for tasks where safety is critical.

Fujitsu claims 634-gram 14-inch Core i7 laptop is world's lightest

Torben Mogensen

LG Gram

In 2019, LG made a sub-1 kg 17" laptop. A 14" screen has about 68% of the area of a 17" screen, so the weights of the two seem more or less proportional to screen area.

Intel, AMD team with tech titans for x86 ISA overhaul

Torben Mogensen

Merger?

In the past AMD and Intel were not allowed to merge due to monopoly concerns. But you can argue that x86 is no longer a monopoly, as ARM powers an increasing fraction of computers (and RISC-V is slowly rising too). So I don't see it as impossible that the two merge. Still selling under their original brands, but sharing more technology and making common business decisions to differentiate the brands more by targeting different segments.

Ex-Intel board members make an ill-conceived case for spinning off Foundry

Torben Mogensen

Sell it to Musk

He is already known for throwing considerable funds at questionable projects and acquisitions (and he can afford it). And he is not a competitor to ARM, Nvidia, etc.

There’s no way Qualcomm is buying Intel as is

Torben Mogensen

Maybe only the foundry?

As far as I recall, Qualcomm has no foundry of its own, so it might be mainly this that interests them. They have plenty of chip designers, and their interest in x86 is probably low. Some of Intel's patents may be interesting to Qualcomm, though I can't say which.

So Qualcomm may bid to acquire Intel's foundry (which has recently been split off) to fabricate its own chips and to offer fabrication to others (including what remains of Intel).

What is this computing industry anyway? The dawning era of 32-bit micros

Torben Mogensen

Re: was that it supported multiple CPUs

RISC OS 2, in 1988 or '89, already had many features that didn't make it into Windows until Windows 95 or later. Its main problem was that it was written in ARM assembler, so updating it to use multiple CPU cores was a major effort. But recall that the move to multicore CPUs didn't happen until about 2005; until then, the trend was ever-faster single cores. Upping the clock frequency increases power usage roughly quadratically, so you get more bang per watt from four cores running at 2 GHz than from one core running at 4 GHz (which would use about the same power). But that was almost two decades after RISC OS was designed, so you can't really blame Acorn for not preparing for it. After all, only about half a decade passed between the BBC B and the first Archimedes computers.

And ARM did become faster. The ARM2 (used in the first Archimedes) ran at 8 MHz, the ARM3 (used in the A5000) at 25-33 MHz, the ARM710 (used in the first RISC PC) at 40 MHz, and StrongARM (used in later RISC PC models) at 200+ MHz. XScale (Intel's version of StrongARM) upped this to 400+ MHz. But by 2000 it was clear that ARM was targeted at mobile devices, so low power usage came at the cost of not being at the bleeding edge of performance. ARM has since broadened its scope to include servers and supercomputers as well as desktop PCs (Apple) and mobile devices. But most of these now use the 64-bit instruction set, which RISC OS cannot handle, being written in 32-bit assembly language.

I originally bought my Archimedes for its hardware, but later came to appreciate it more for its software -- RISC OS was far ahead of the competition, and there was good software for word processing and graphics. Its font manager used the same rendering engine for print and screen (something that didn't happen until much later on Mac and Windows), so you really got what you saw. But RISC OS these days is more or less stuck in the 1990s, with little new software and a lack of compatibility. I don't think there is a LibreOffice port, nor Chrome, Opera or Firefox.

Sweet 16 and making mistakes: More of the computing industry's biggest fails

Torben Mogensen

Re: Sinclair QL wasn't 16 bit

(About the iAPX 432.) It didn't help that Intel decided to use bit-level addressing.

Where the computer industry went wrong – the early hits

Torben Mogensen

> Since nobody has done so yet, I will now make a vague comment about code density on Z80 versus code density on 6502, and how that means that the machines' ROM sizes are not directly comparable, and then wander off without giving any specific examples or anything.

Code density wasn't really that different. You needed more instructions on the 6502, but they were shorter than on the Z80. This paper (https://web.eece.maine.edu/~vweaver/papers/iccd09/iccd09_density.pdf) puts the total benchmark size on the 6502 just over 1024 bytes, while the Z80 is just under. But these are compiler-generated codes, and the 6502 is notoriously difficult for compilers to generate dense code for. And some of the benchmarks are string copying and similar, where the Z80 has dedicated instructions.

Torben Mogensen

Re: Two significant characters?

On the PET and VIC20, it was definitely true that only the first two characters were significant. I'm not sure about C64, but probably.

But I once worked with a computer with an even more severe limitation: on the Swedish ABC80, variable names were either one letter or one letter followed by one digit. To make matters worse, O and 0 looked exactly the same on the screen, so it was impossible to see if you had mistyped 0 as O or vice versa (which could easily happen, as the keys are close). It also had the oddity of suffixing integer constants with % (like integer variables), so O% is a variable and 0% is a constant.

Torben Mogensen

Re: Huge gaps in this history

Contrary to what the article's author claims, BBC owners didn't use mode 7 much, and games certainly didn't, except for menu screens and the like. He is correct that the 32K memory limit was a serious limitation, but the high price was also a problem. Various independent companies made RAM extensions, some of which allowed the video memory to use the same address space as the ROM, and Acorn later made similar extensions in the BBC+ and Master, but these were even more expensive.

To compete against the lower-cost rivals, the C64 and the Spectrum, Acorn made the Electron, a cut-down version of the BBC B. Not only was mode 7 omitted (a decent emulation using a 16-colour mode was made, but it used much more memory), but they used 64K×4-bit memory chips and fiddled with the addressing so they emulated 32K of 8-bit memory. This made it much slower, as two memory accesses were required to fetch one byte, and it also delayed production because it was difficult to get right. IMO, they should have doubled the memory to 64K and mapped the video into the high addresses (as the BBC B extensions did), even if it increased the cost a bit. This would have made it a more serious rival to the C64. What started the Acorn downfall was that they had produced a huge number of Electrons for a Christmas market that never materialised, because too many children had already got computers the previous Christmas, so most ended up in storage, generating a huge loss.

While Acorn reclaimed some of their market with the Archimedes series, it came out after the Amiga, which hurt sales, and it never got the same selection of games as the Amiga had. It was also more expensive, following the Acorn tradition of making lovely, but expensive, kit.

Woman uses AirTags to nab alleged parcel-pinching scum

Torben Mogensen

Re: "police declined to pursue the matter"

"I wonder what their policy is on murder ? Never on a Monday ?"

- Tell me why?

- I don't like Mondays, I wanna shoot the whole day down

Intel's processor failures: A cautionary tale of business vs engineering

Torben Mogensen

Intel hasn't really been technologically successful since the 8080

Intel basically invented the microprocessor with the 4004 and 8080, but since then their track record at being innovative has not been good:

- They planned on the iAPX 432 to be their flagship product, and designed the 8086 mainly as a stop-gap measure. iAPX 432 failed, and 8086 succeeded mainly by being chosen by IBM for their PC. And that platform succeeded mainly because IBM didn't get exclusive rights to the OS, so lots of clones appeared and many PC-makers (such as Commodore) dropped their own product lines to make IBM clones.

- Intel was moderately successful with the 386 and Pentium (which were not really innovative, but clamp-on 32-bit extensions to 8086), but they still rode on the MS-DOS/Windows success.

- Intel then planned on Itanium being their 64-bit platform, but as that failed to gain traction and AMD had success with their own 64-bit variant of the x86 platform, Intel had to make their own versions of the AMD processor.

- Intel, as mentioned earlier, didn't expect the mobile phone market to be so lucrative, and only made a half-hearted effort (the Atom) in making a processor for that market.

- Intel insisted on making complete processor chips instead of selling cores that other companies could add to -- something that ARM did quite successfully.

- Intel has failed to gain real traction on high-end graphics processors, leaving Nvidia and AMD as the main players here.

So, in my book, Intel has since 1980 mainly been successful as a manufacturer of chips and not so much as a designer of them. They have seen the mobile-device market go to ARM, the graphics market go to Nvidia and AMD, lost the Apple processor market to ARM, and are beginning to lose the server market to ARM as well. Their grand clean-slate efforts (the iAPX 432 and Itanium) failed because Intel forgot to keep things simple.

Torben Mogensen

Re: A Pentium joke from 1994

A friend of mine made this limerick:

There once was a chip from Intel

Whose floating point unit was hell

With every division

It lost some precision

But they hoped that no one could tell.

Ten years ago Microsoft bought Nokia's phone unit – then killed it as a tax write-off

Torben Mogensen

Symbian

Symbian was essentially just a renaming of Psion's EPOC32 OS, developed for their palmtop organisers. Psion, Nokia, Ericsson, and Motorola formed a joint venture around the OS, and Psion's software arm became Symbian Ltd (which Nokia later bought outright).

NASA needs new ideas and tech to get Mars Sample Return mission off the ground

Torben Mogensen

Assemble in orbit

One way to reduce cost is to ship parts to the ISS (over multiple missions) and assemble the spacecraft there. You don't need fancy automatic unfolding of solar panels and the like; that can be done by astronauts in space.

AI chemist creates catalysts to make oxygen using Martian meteorites

Torben Mogensen

Oxygen is not the (main) problem

Oxygen is easily found on the Moon, on Mars, and even on asteroids in the form of oxides such as iron oxide (rust). Hydrogen is much more of a problem, and you need that for water and hydrocarbons. On the Moon, hydrogen has (so far) only been found as a light dusting of hydrogen-containing molecules deposited by the solar wind, so it will be a major problem there. On Mars, it is still possible to find water near the poles and underground, so it will probably be less of an issue.

That said, a good catalyst for extracting oxygen from the oxides in Martian soil is not a bad thing.

Sorry Pat, but it's looking like Arm PCs are inevitable

Torben Mogensen

Intel making ARMs?

As the StrongARM threads mention, Intel once made ARM processors. They could do so again. Intel is both a processor-design company and a chip-production company. They have traditionally preferred to produce only processors of their own design, but if x86 becomes less popular, Intel may look to other architectures. They have not had a good track record of designing successors to the x86 line -- the iAPX 432 was no success, and Itanium not really either, in spite of being hyped enough that other companies stopped development of their own processor designs (Compaq stopped developing Alpha, Silicon Graphics stopped developing MIPS, and HP stopped developing PA-RISC, all jumping on the Itanium bandwagon). Even x86-64 was not Intel's own design -- AMD did that. So they may have to admit defeat and make ARM-compatible processors alongside x86. With their experience in production technology, they would probably be able to make a competitive ARM design. They might even make processors that can run both x86 and ARM, either having cores for both instruction sets or making cores that can switch between the two.

US AGs: We need law to purge the web of AI-drawn child sex abuse material

Torben Mogensen

Violence in films and games?

There has been a long debate on whether seeing violence in films and games (where no actual violence towards people or animals has happened) makes "impressionable" people more likely to commit acts of violence. So far, there has been no evidence that this is true -- apart from questionable studies that look at people who have committed violence, note that they have watched such films and played such games, and conclude that this is why they did what they did, without considering other possible causes. The causality might very well run the other way.

So simply assuming that watching AI-generated child porn would make people more likely to commit real-life abuse is questionable, and making laws on such an assumption even more so. It could even be that "evil desires" could be sated by AI-generated images. After all, the number of voyeur cases dropped after porn became legal.

NASA to outdo most Americans on internet speeds, gigabit kit heading to the ISS

Torben Mogensen

Hot singles?

A decent amount of hot, single hydrogen atoms for sure.

US Air Force wants $6B to build 2,000 AI-powered drones

Torben Mogensen

Not your garden variety drone

I expect one reason for the (by drone standards) rather high cost is that they need to be supersonic to work alongside manned fighter jets. I don't think any supersonic fighter drone is in production at this time (there are several prototypes, including the British BAE Taranis, though not much has happened with that lately). China has a supersonic surveillance drone in production (https://en.wikipedia.org/wiki/AVIC_WZ-8), but you need higher manoeuvrability for fighter drones.

But there are countless advantages of unmanned fighter jets: You don't need life support, they can (for that reason) be smaller, which aids manoeuvrability, and they can withstand much higher G-force than a human. I agree that there should be limits to autonomy: They should definitely not choose their targets, but it might be interesting to allow the AI to refuse a target if it finds too many civilians nearby. And the AI can definitely handle navigation and evasive manoeuvres on its own.

Quantum computing: Hype or reality? OVH says businesses would be better off prepared

Torben Mogensen

QC will never break strong encryption

I saw a talk from our local QC expert (University of Copenhagen) about the Quantum Fourier Transform (QFT), which is the basis for pretty much all of the algorithms that can break encryption in polynomial time on a QC, as well as for quantum chemistry algorithms. After presenting the algorithm, he talked about its limitations with respect to realistic quantum computers. Specifically:

1. QFT needs perfect qubits (never gonna happen) or strong error correction, which means around 1000 realistic qubits for each error-corrected qubit, so we need around a million realistic qubits. Currently, we are at about 100 of those.

2. We need rotations of a quantum state (in a complex-number hypersphere) to a precision better than 1/2^n of a full rotation (for n-bit numbers). Currently, we are around 1/2³, and we might reach 1/2⁴ in a couple of years. 1/2²⁵⁶ will not happen in the next 50 years (if ever), and by then strong encryption will use many more bits. (The gate sketched after this list is where these rotations come from.)

3. We need quantum computers that can sustain a superposition for thousands of consecutive operations. We are currently at a few hundred single-qubit operations and a few dozen two-qubit operations (specifically, controlled negations).

4. Qubits can only interact with their nearest neighbours (in a square or hexagonal grid), and to bring qubits that need to interact next to each other, we need a lot of swaps. A swap can be done using three controlled negations, so we can currently do fewer than 10 of those.
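For reference (my summary, not part of the talk): the rotations in point 2 are the controlled phase gates that the QFT applies between pairs of qubits, and for n-bit numbers the finest of them is a $1/2^n$ fraction of a full turn:

$$
R_k \;=\; \begin{pmatrix} 1 & 0 \\ 0 & e^{2\pi i/2^{k}} \end{pmatrix},
\qquad k = 1, \dots, n .
$$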

So, while quantum computers _may_ become useful for very specialised purposes, they will IMO never break strong encryption. Quantum effects can, however, themselves be used for secret communication (quantum key distribution), but that isn't really quantum computing.

India gives itself a mission to develop a 1000-qubit quantum computer in just eight years

Torben Mogensen

Waste of money

I don't see quantum computers having any impact outside very specialised areas (code breaking not being one of them). Most algorithms that claim quantum superiority rely on the Quantum Fourier Transform, and that requires heavily error-corrected qubits and exact rotations in quantum vector spaces to a precision of 1/2^n of a full rotation (for n bits). A colleague of mine who works with quantum computers says that you need at least 1000 physical qubits to make one error-corrected qubit, and that the current precision of rotation is about 1/8 of a full rotation. So 1000 qubits is not going to shake anything -- you need about a million to get enough error-corrected qubits for breaking RSA, and about 2¹⁰⁰ times better precision for rotations. And well before that, encryption will use more bits (and better algorithms), so quantum computers will never catch up.

India would get much more payback by investing in research in solar cell and battery technology -- that is definitely going to have an economic impact, and it is much more realistic than QC.

NASA, DARPA to go nuclear in hopes of putting boots on Mars

Torben Mogensen

Jules Verne

> Steam powered rockets! Something very Jules Verne about that.

Shooting the spacecraft out of a cannon would be more Vernian. Something like https://www.space.com/spinlaunch-aces-10th-suborbital-test-launch

Bringing the first native OS for Arm back from the brink

Torben Mogensen

64-bit port

I think a port to 64-bit ARM should try to rewrite as much as possible in a high-level language, so it can be recompiled on other platforms. Some parts of the kernel may very well need to be written in assembly language, but this should be minimised, perhaps by refactoring the kernel to separate the hardware-dependent parts from the hardware-independent parts.

An interface that allows Linux or BSD drivers to be used with RISC OS would also be useful, as it would open up a lot of external devices. Something that can use graphics processors effectively (such as OpenGL and OpenCL interfaces) would also be nice.

While my first computers were a BBC Model B, an Archimedes 400, and an A5000, what I loved about RISC OS was not so much that it ran on ARM, but some of the features it offered that no other OS at the time did, and few do today:

- A font manager and anti-aliasing renderer that gave identical (up to resolution) output on screen and print. Printing was a bit slow, as a bit map was sent to the printer, but the benefits were enormous.

- A common interface for saving and loading files by drag-and-drop.

- Applications-as-folders.

- File types that do not depend on file-name extensions. I wish this had been extended to folder types too, so we could avoid the ! in application names.

- Select, Menu, and Modify Mouse buttons. Especially the pop-up menus are nice.

- Easy-to-use graphics. The effort it takes to open a graphics window and draw a line on other systems is just ridiculous.

But RISC OS is lagging increasingly behind other systems, especially where device support is concerned. It also has poor security. The main reason it is not infested with malware is that nobody bothers to make it.

Time Lords decree an end to leap seconds before risky attempt to reverse time

Torben Mogensen

Let it slide

I don't see much point in leap seconds if the purpose is to keep civil time synchronised with the Earth's rotation. It has no practical relevance: the new year isn't even at the (northern hemisphere) winter solstice, and solar midnight isn't at 24:00 except in very few places, so why not let it slide?

We might as well get rid of leap days too. Yes, this will make the new year slide more quickly away from the winter solstice, but why should that matter? It's not as if people sow and harvest according to the calendar any more. And time zones: these were introduced because each town had its own local time that deviated slightly from its neighbours', and the main motivation for synchronising time was planning train schedules. Time zones are now oddly shaped, some differ by only 30 minutes from their neighbours, and there are more than 24 of them. We could get rid of this complexity by using TAI globally (without offsets). So what if school starts at 14:00 in some places on Earth and at 04:00 in others? Yes, the few places that use AM and PM will need to get used to 24-hour time, but that is long overdue anyway.

And while we are at it, months of unequal length that are not even lunar months are a mess. Let us have twelve 30-day months per year, even if this slides by about 5¼ days relative to the solstice every year. 360 is a nice round number of days per year -- it divides evenly into thirds, quarters, tenths, and more. And weeks of seven days do not fit with anything, so let them be six days, so there are exactly five of them per month. Four work/school days before the weekend sounds fine to me.

Meta proposes doing away with leap seconds

Torben Mogensen

Re: Do we need leap seconds?

I agree, but why adjust at all? In modern society, there is no real need for the calendar year to coincide with the solar year. Already it doesn't: midwinter is ten days before New Year, so letting it slide even further doesn't matter. Our months don't coincide with the phases of the Moon, so why should our year coincide with the Earth's orbit around the Sun?

We could even drop the leap day every (roughly) four years, and it wouldn't matter. We could also decide that every other month is 30 days and the rest 31 days (getting rid of the irregularity of February), making a year 366 days instead of 365.25 days. Or we could make months 30 days each, which makes a year 360 days, a number that divides evenly by many others -- the reason we have 360 degrees to a circle. And while we are at it, drop time zones and use CET everywhere. So what if school starts at 03:00 in some countries and at 17:00 in others?

Astronomers already use a different year that aligns with the positions of stars (other than our own), called the sidereal year, so they can keep using astronomically accurate time, but the rest of us don't have to.

You're not wrong. The scope for quantum computers remains small

Torben Mogensen

I see more future in extreme low power computing

One of the main barriers to parallelism today is power consumption (and the related need for cooling), so in terms of solving otherwise intractable problems, I see more future in extreme parallelism using extremely low-power processing elements. Sure, it won't reduce the asymptotic complexity of hard problems, but it will allow larger problems to be solved. Using its graphics processor, my laptop can solve in seconds problems that required hours of supercomputer time twenty years ago. Sure, graphics processors use a lot of power, but per compute element it is much less than a CPU, and reducing the power use even further will allow more parallelism.

Radical reduction in power usage will probably require something other than shrinking or otherwise improving CMOS technology, and exactly which technology will replace CMOS is not clear. Nanomagnets and superconducting materials have potential for extremely low power but require complex setups (such as extreme cooling), though that is not so different from the requirements of quantum computers. Carbon nanotubes are another possibility. By Landauer's principle (https://en.wikipedia.org/wiki/Landauer%27s_principle), extreme low-power computation may require a restriction to reversible operations, but this is true of quantum computation as well (unitary operations are reversible).
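For scale (my addition), Landauer's bound on the energy that must be dissipated to erase one bit at temperature $T$ is

$$
E_{\min} \;=\; k_B T \ln 2 \;\approx\; 2.9 \times 10^{-21}\ \mathrm{J} \quad \text{at } T = 300\ \mathrm{K},
$$

which is many orders of magnitude below what a CMOS gate dissipates per switching event today -- hence the interest in reversible (erasure-free) operation at the extreme low-power end.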

RISC OS: 35-year-old original Arm operating system is alive and well

Torben Mogensen

Re: Some features i would like today

I would really like to see the file-types concept extended to cover directory types as well. As it is, directories with names starting with ! are apps, but that would be better done as a directory type. I also recall that some word processors saved documents as apps, so that pictures etc. were stored in separate files inside the directory, but you could still open the document by clicking its icon. This, too, would be better handled by directory types, so that documents did not need names starting with !.

Page: