* Posts by Torben Mogensen

503 publicly visible posts • joined 21 Sep 2006


AI chemist creates catalysts to make oxygen using Martian meteorites

Torben Mogensen

Oxygen is not the (main) problem

Oxygen is easily found on the Moon, Mars, and even asteroids in the form of oxides like iron oxide (rust). Hydrogen is much more of a problem, and you need that for water and hydrocarbons. On the Moon, hydrogen has (so far) only been found as a light dusting of hydrogen-bearing molecules deposited by the solar wind, so it will be a major problem there. On Mars, water can still be found near the poles and underground, so hydrogen will probably be less of an issue.

That said, a good catalyst for extracting oxygen from the oxides in Martian soil is not a bad thing.

Sorry Pat, but it's looking like Arm PCs are inevitable

Torben Mogensen

Intel making ARMs?

As the StrongARM threads mention, Intel once made ARM processors. They could do so again. Intel is both a processor design company and a chip production company. It has traditionally preferred to produce only processors of its own design, but if x86 becomes less popular, Intel may look to other architectures. Its track record of designing successors to the x86 line is not good: the iAPX 432 was no success, and Itanium not really either, despite being hyped enough that other companies stopped developing their own processors (Compaq stopped developing Alpha, Silicon Graphics stopped developing MIPS, and HP stopped developing PA-RISC, all jumping on the Itanium bandwagon). Even x86-64 was not Intel's own design -- AMD did that.

So Intel may have to admit defeat and make ARM-compatible processors alongside x86. With its experience in production technology, it would probably be able to make a competitive ARM design. It might even make processors that can run both x86 and ARM, either with cores for both instruction sets or with cores that can switch between the two.

US AGs: We need law to purge the web of AI-drawn child sex abuse material

Torben Mogensen

Violence in films and games?

There has been a long debate about whether seeing violence in films and games (where no actual violence towards people or animals has taken place) makes "impressionable" people more likely to commit acts of violence. So far, there has been no evidence that this is true -- only questionable studies that look at people who have committed violence, note that they have watched such films and played such games, and conclude that this is why they did what they did, without considering other possible causes. The causation might very well run the other way.

So simply assuming that watching AI-generated child porn makes people more likely to commit real-life abuse is questionable, and making laws on such an assumption even more so. It could even be that "evil desires" are sated by AI-generated images. After all, the number of voyeurism cases dropped after porn became legal.

NASA to outdo most Americans on internet speeds, gigabit kit heading to the ISS

Torben Mogensen

Hot singles?

A decent amount of hot, single hydrogen atoms for sure.

US Air Force wants $6B to build 2,000 AI-powered drones

Torben Mogensen

Not your garden variety drone

I expect one reason for the (by drone standards) rather high cost is that they need to be supersonic to work alongside manned fighter jets. I don't think any supersonic fighter drone is in production at this time (there are several prototypes, including the British BAE Taranis, though not much has happened with that lately). China has a supersonic surveillance drone in production (https://en.wikipedia.org/wiki/AVIC_WZ-8), but a fighter drone needs higher manoeuvrability.

But there are countless advantages to unmanned fighter jets: you don't need life support, they can (for that reason) be smaller, which aids manoeuvrability, and they can withstand much higher G-forces than a human. I agree that there should be limits to autonomy: they should definitely not choose their own targets, but it might be interesting to allow the AI to refuse a target if it finds too many civilians nearby. And the AI can definitely handle navigation and evasive manoeuvres on its own.

Quantum computing: Hype or reality? OVH says businesses would be better off prepared

Torben Mogensen

QC will never break strong encryption

I saw a talk by our local QC expert (University of Copenhagen) about the Quantum Fourier Transform (QFT), which is the basis for pretty much all of the algorithms that can break encryption in polynomial time on a QC, as well as for quantum chemistry algorithms. After presenting the algorithm, he talked about its limitations with respect to realistic quantum computers. Specifically:

1. QFT needs perfect qubits (never gonna happen) or strong error correction, which means around 1000 physical qubits for each error-corrected qubit, so we need around a million physical qubits. Currently, we are at about 100 of those.

2. We need rotations of a quantum state on the complex-number hypersphere to a precision better than 1/2^n of a full rotation (for n-bit numbers). Currently, we are at around 1/2³, and we might reach 1/2⁴ in a couple of years. 1/2²⁵⁶ will not happen in the next 50 years (if ever), and by then strong encryption will use many more bits. (A rough illustration of how small these rotations get follows after this list.)

3. We need quantum computers that can sustain a superposition for thousands of complex operations. We are currently at a few hundred single-qubit operations and a few dozen two-qubit operations (specifically, controlled negation).

4. Qubits can only interact with their nearest neighbours (in a square or hexagonal grid), so to bring qubits that need to interact next to each other, we need a lot of swaps. A swap can be done using three controlled negations, so we can currently do fewer than 10 of those.
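To put point 2 in perspective, here is a back-of-the-envelope sketch (my own numbers, not from the talk, assuming the textbook exact QFT circuit):

```python
import math

# The textbook exact QFT on n qubits uses controlled-phase rotations of
# angle 2*pi / 2^k for k = 2..n, so the finest rotation needed is 2*pi / 2^n.
# 2^-n underflows a double for large n, so work with log10 instead.
def log10_smallest_qft_rotation(n_qubits: int) -> float:
    return math.log10(2 * math.pi) - n_qubits * math.log10(2)

for n in (8, 64, 256, 2048):
    print(f"n = {n:4d}: smallest rotation ~ 10^{log10_smallest_qft_rotation(n):.0f} rad")

# The ~1/2^3-of-a-turn precision mentioned in point 2, for comparison:
print(f"claimed current precision ~ 10^{math.log10(2 * math.pi / 8):.1f} rad")
```

(In practice one would use an approximate QFT that drops the tiniest rotations; the sketch just illustrates the scale of the angles in the exact circuit.)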

So, while quantum computers _may_ become useful for very specialised purposes, it will IMO never break strong encryption. Quantum superposition can, however, itself be used for secret communication, but that isn't really quantum computing.

India gives itself a mission to develop a 1000-qubit quantum computer in just eight years

Torben Mogensen

Waste of money

I don't see quantum computers having any impact outside very specialised areas (code breaking not being one of them). Most algorithms that claim quantum superiority rely on the Quantum Fourier Transform, and that requires heavily error-corrected qubits and exact rotations in quantum vector spaces to a precision of 1/2^n of a full rotation (for n bits). A colleague of mine who works with quantum computers says that you need at least 1000 physical qubits to make one error-corrected qubit, and that the current precision of rotation is about 1/8 of a full rotation. So 1000 qubits is not going to shake anything -- you need about a million to get enough error-corrected qubits for breaking RSA, and about 2¹⁰⁰ times better precision for rotations. And well before that, encryption will use more bits (and better algorithms), so quantum computers will never catch up.

India would get much more payback by investing in research in solar cell and battery technology -- that is definitely going to have an economic impact, and it is much more realistic than QC.

NASA, DARPA to go nuclear in hopes of putting boots on Mars

Torben Mogensen

Jules Verne

> Steam powered rockets! Something very Jules Verne about that.

Shooting the spacecraft out of a cannon would be more Vernian. Something like https://www.space.com/spinlaunch-aces-10th-suborbital-test-launch

Bringing the first native OS for Arm back from the brink

Torben Mogensen

64-bit port

I think a port to 64-bit ARM should try to rewrite as much as possible to a high-level language, so it can be recompiled on other platforms. Some parts of the kernel may very well need to be written in assembly language, but this should be minimized, perhaps by refactoring the kernel to separate the hardware-dependent parts from the hardware-independent parts.

An interface that allows Linux or BSD drivers to be used with RISC OS would also be useful, as it would open up a lot of external devices. Something that can use graphics processors effectively (such as OpenGL and OpenCL interfaces) would also be nice.

While my first computers were a BBC Model B, an Archimedes 400, and an A5000, what I loved about RISC OS was not so much that it ran on ARM, but some of the features it offered that no other OS at the time did, and few do today:

- A font manager and anti-aliasing renderer that gave identical (up to resolution) output on screen and print. Printing was a bit slow, as a bit map was sent to the printer, but the benefits were enormous.

- A common interface for saving and loading files by drag-and-drop.

- Applications-as-folders.

- File types that do not depend on file-name extensions. I wish this had been extended to folder types too, so we could avoid the ! in application names.

- Select, Menu, and Modify Mouse buttons. Especially the pop-up menus are nice.

- Easy-to-use graphics. The effort it takes to open a graphics window and draw a line on other systems is just ridiculous.

But RISC OS is lagging increasingly behind other systems, especially where device support is concerned. It also has poor security. The main reason it is not infested with malware is that nobody bothers to make it.

Time Lords decree an end to leap seconds before risky attempt to reverse time

Torben Mogensen

Let it slide

I don't see much point in leap seconds if the purpose is to keep clock time synchronized with the Earth's rotation. It has no practical relevance: the new year isn't even at the (northern hemisphere) winter solstice, and solar midnight isn't at 24:00 except in very few places, so why not let it slide?

We might as well get rid of leap days too. Yes, this will make the new year drift away from the winter solstice more quickly, but why should that matter? It's not as if people sow and harvest according to the calendar any more. And then there are time zones. These were introduced because each town had its own local time that deviated slightly from its neighbours', and the main motivation for synchronizing time was planning train schedules. Time zones are now oddly shaped, and some differ by only 30 minutes from their neighbours (there are more than 24 zones). We could get rid of this complexity by using TAI globally (without offsets). So what if school starts at 14:00 in some places on Earth and at 04:00 in others? Yes, the few places that use AM and PM will need to get used to 24-hour time, but that is long overdue anyway.

And while we are at it, months of unequal length that are not even lunar months are a mess. Let us have twelve 30-day months per year, even if this slides by 5.256 days relative to the solstice every year. 360 is a nice round number of days per year -- it divides evenly into thirds, quarters, tenths, and more. And weeks of seven days do not fit anything, so make them six days, giving exactly five weeks per month. Four work/school days before the weekend sounds fine to me.

Meta proposes doing away with leap seconds

Torben Mogensen

Re: Do we need leap seconds?

I agree, but why adjust at all? In modern society, there is no real need for the calendar year to coincide with the solar year. Already it doesn't: midwinter is ten days before New Year, so letting it slide even further doesn't matter. Our months don't coincide with the phases of the Moon, so why should our year coincide with the Earth's orbit around the Sun?

We could even drop leap days, which occur roughly every four years, and it wouldn't matter. We could decide that every other month is 30 days and the rest 31 days (getting rid of the irregularity of February), making a year 366 days instead of 365.25 days. Or we could make all months 30 days, which makes a year 360 days, a number that divides evenly by many factors -- the same divisibility that gave us 60 minutes to an hour and 360 degrees to a circle. And while we are at it, drop time zones and use CET everywhere. So what if school starts at 03:00 in some countries and at 17:00 in others?

Astronomers already use a different year that aligns with the positions of stars (other than our own), called the sidereal year, so they can keep using astronomically accurate time, but the rest of us don't have to.

You're not wrong. The scope for quantum computers remains small

Torben Mogensen

I see more future in extreme low power computing

One of the main barriers to parallelism today is power consumption (and the related need for cooling), so in terms of solving otherwise intractable problems, I see more future in extreme parallelism using extremely low-power processing elements. Sure, it won't reduce asymptotic complexity of hard problems, but it will allow larger problems to be solved. My laptop can, using its graphics processor, in seconds solve problems that required hours of supercomputer time twenty years ago. Sure, graphics processors use a lot of power, but per compute element it is much less than a CPU. Reducing the power use even further will allow more parallelism.

A radical reduction in power usage will probably require something other than shrinking or otherwise improving CMOS technology, and exactly which technology will replace CMOS is not clear. Nanomagnets and superconducting materials have potential for extremely low power but require complex setups (such as extreme cooling), though this is not so different from the requirements of quantum computers. Carbon nanotubes are another possibility. By Landauer's principle (https://en.wikipedia.org/wiki/Landauer%27s_principle), extreme low-power computation may require a restriction to reversible operations, but this is true of quantum computation as well (unitary operations are reversible).
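For a sense of scale, here is a rough calculation of the Landauer bound (my own figures; the CMOS comparison number is just an assumed ballpark):

```python
import math

# Landauer's bound: erasing one bit dissipates at least k_B * T * ln 2.
# This floor applies only to irreversible operations; fully reversible
# logic has no such lower limit.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temperature_kelvin: float) -> float:
    return K_B * temperature_kelvin * math.log(2)

for label, temp in [("room temperature", 300.0), ("liquid helium", 4.2)]:
    print(f"{label:>16} ({temp:5.1f} K): {landauer_limit_joules(temp):.2e} J per erased bit")

# For comparison, switching a CMOS gate today costs somewhere around
# 1e-18 to 1e-15 J (assumed ballpark), i.e. orders of magnitude more.
```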

RISC OS: 35-year-old original Arm operating system is alive and well

Torben Mogensen

Re: Some features i would like today

I would really like to see the file-type concept extended to cover directory types as well. As it is, directories with names starting with ! are apps, but that would be better done as a directory type. I also recall that some word processors saved documents as apps, so that pictures etc. were stored as separate files in the directory but you could still open the document by clicking its icon. This, too, would be better handled by directory types, so that documents did not need names starting with !.

Torben Mogensen

RISC OS needs to be rewritten in a high-level language

RISC OS is written mainly in ARM assembler, which means it is locked to 32-bit ARM processors while the rest of the world is moving to 64-bit processors. There are even ARM processors now that are 64-bit only, so they can't run 32-bit code at all.

Porting to 64-bit ARM assembler is just another dead end, so the OS needs to be rewritten in a high-level language with minimal assembly code. Making RISC OS into a Linux distribution that retains as many RISC OS features as possible might even be a reasonable choice.

Torben Mogensen

Re: Some features i would like today

I don't see file types as worse than three-letter file-name extensions. Even with file-name extensions, programs need to know what they mean -- you can't just create a new extension such as .l8r and expect programs to know what to do with it. The OS needs to assign default applications to each file-name extension, which isn't much different from assigning default applications to file types, as RISC OS does by setting OS variables (which can also assign icons to file types). And there is the same risk of the same file-name extension or type being used for different purposes, creating confusion. I have observed this several times with file-name extensions.

The main problem with RISC OS file types is that there are too few: 12 bits gives 4096 different types, whereas three alphanumeric characters (case insensitive) give 46656 different file-name extensions, over ten times as many. 32-bit file types would work in most cases, since picking a random type would be unlikely to clash with existing ones. 64 bits would be even better.

Also, a lot of programs expect file-name extensions and will, for example, take a .tex file and produce .aux, .log, .pdf and so on with the same root name. If these are compiled for RISC OS, the root name becomes a directory containing files named tex, aux, log, and pdf (since RISC OS uses . as the directory separator). In some cases it is nice that all these files are collected in a single folder, but you often need to copy the pdf file to some other location, and then you lose the root name unless you rename the file. You could modify LaTeX and other programs to use file types instead, but that would be a major effort. The best solution is probably a wrapper for Linux programs that, in a temporary directory, converts file types to file-name extensions (using subdirectories) and afterwards converts back to file types. Such a wrapper could be generic across Linux command-line programs, as long as it has a translation table for all the file-name extensions and file types involved. Has such a wrapper ever been made?
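As a rough sketch of the shape such a wrapper could take (my own illustration, not an existing tool -- it uses the ",xxx" hex-type suffix convention seen when RISC OS files are stored on foreign filesystems rather than the root-name directory layout described above, and the type-to-extension table is partly made up):

```python
import shutil, subprocess, tempfile
from pathlib import Path

# Sketch only: "name,fff"-style type suffixes stand in for real RISC OS
# file-type metadata, and the table below is mostly assumed.
TYPE_TO_EXT = {"fff": "txt", "adf": "pdf", "2d4": "tex"}   # "2d4" is a made-up code
EXT_TO_TYPE = {ext: t for t, ext in TYPE_TO_EXT.items()}

def run_wrapped(command: list[str], work_dir: Path) -> None:
    with tempfile.TemporaryDirectory() as tmp_name:
        tmp = Path(tmp_name)
        # 1. Copy typed files in, translating "report,2d4" into "report.tex" etc.
        for f in work_dir.iterdir():
            if f.is_file() and "," in f.name:
                stem, typ = f.name.rsplit(",", 1)
                shutil.copy(f, tmp / f"{stem}.{TYPE_TO_EXT.get(typ, 'dat')}")
        # 2. Run the unmodified command-line tool on the renamed copies.
        subprocess.run(command, cwd=tmp, check=True)
        # 3. Copy everything back, translating extensions into type suffixes.
        for f in tmp.iterdir():
            typ = EXT_TO_TYPE.get(f.suffix.lstrip("."), "fff")
            shutil.copy(f, work_dir / f"{f.stem},{typ}")

# Hypothetical usage: run_wrapped(["pdflatex", "report.tex"], Path("Docs"))
```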

The sad state of Linux desktop diversity: 21 environments, just 2 designs

Torben Mogensen

RISC OS

As the author indicated, RISC OS had (from around 1989) many features that didn't make it into Windows before Win95 (and some that no later system has had). Apart from purely cosmetic things like marbled title bars and scroll bars whose size shows how large a fraction of the contents is visible (something that didn't make it into competing systems until much later), RISC OS uses a three-button mouse with the following functions:

Left ("select"): works mostly as the left button on most modern GUIs

Middle ("menu"): Pop-up menu activation

Right ("adjust"): does a variation of the left button, for example selecting a file icon without deselecting the already selected, selecting a window for input focus without sending it to the front, selecting a menu item without closing the menu, and so on.

This is something I miss in Linux GUIs.

It also had something like TrueType (but predating it and IMO better, since it used cubic rather than quadratic splines and allowed anti-aliasing), which let all apps share the same fonts. Printing was done by rendering pages with the same engine as screen rendering, so it was truly WYSIWYG (unlike the Mac at the time). The only disadvantage was slow printing, as everything was sent as a bitmap (though you could print raw text using the printer's built-in fonts). But it made installing new printers quite easy: you just had to describe the commands for starting graphics mode and how pixels are sent to the printer. I had a printer that was not in the list of already-supported printers, and it took me less than half an hour to get it running.

Heresy: Hare programming language an alternative to C

Torben Mogensen

300 languages??

Already in 1966, Peter Landin wrote a paper called "The Next 700 Programming Languages" (read it if you haven't), citing an estimate from 1965 that 1,700 programming languages were then in use. I seriously doubt that this number has declined to 300 by 2022; rather, I expect there are at least 17,000 languages by now. Granted, some of these are domain-specific languages used exclusively in-house by companies or educational institutions, or experimental languages under development (I would categorize Hare as one of these). But even the number of languages used for serious development in at least a double-digit number of places would exceed 300.

ZX Spectrum, the 8-bit home computer that turned Europe on to PCs, is 40

Torben Mogensen

Re: "Rival machines, such as the Commodore 64, did not suffer from the same problem"

I came to the comments to correct this misunderstanding, but you beat me to it.

The BBC micro was one of the few among the 8-bit crowd to have separate colour information for every pixel. It did use more memory for similar screen resolution (when using more than two colours), but was much easier to use.

Take this $15m and make us some ultra-energy-efficient superconductor chips, scientists told

Torben Mogensen

The theoretical limit

The article says that they might be "approaching the theoretical limit of energy efficiency". I assume they mean the Landauer limit, which puts a lower bound on dissipation when performing operations that lose information. For example, an AND gate has two inputs and one output, so it loses a bit over one bit of information on average (about 1.19 bits with uniformly random inputs). But it is possible to go below this limit if you don't lose information, i.e., if all operations are reversible; in that case there is no lower limit. And you can do surprisingly much with only reversible operations -- though you sometimes need to output extra "garbage" bits in addition to the result bits to preserve reversibility.
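Checking that AND-gate figure (my own calculation, assuming uniformly random, independent input bits):

```python
import math

# Information lost by a gate = H(inputs) - H(output), since the output is a
# deterministic function of the inputs.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

h_inputs = entropy([0.25] * 4)        # four equally likely input pairs: 2 bits
h_output = entropy([0.75, 0.25])      # AND outputs 1 only for input (1, 1)
print(f"bits lost per AND operation: {h_inputs - h_output:.3f}")   # ~1.189
```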

Still, the Landauer limit is several orders of magnitude below the current energy use of semiconductor gates, which also dissipate more in wires than in the gates themselves, so superconducting wires and gates would reduce power dissipation quite radically. As for cooling, that is proportional to power dissipation, so while superconducting chips need to be cold, they don't heat their environment very much; in a well-insulated container, the cost of keeping them cold may not be too bad.

Why the Linux desktop is the best desktop

Torben Mogensen

Re: Linux "Desktop"

"Something must be compelling for so many businesses to use Microsoft Windows"

Inertia, mostly. If a company has used Windows for 20+ years, they need a very good reason to change, since change costs. So it is not so much a compelling reason to use Windows, it is lack of a compelling reason to use Linux.

Also, many companies use other Microsoft software than just Office and Outlook. For example, Sharepoint and Microsoft Dynamics. Or they may use legacy systems that only run on Windows servers.

But, yes, for the average office worker, Linux is fine. But the server software is harder to migrate.

Said by someone who is a daily Linux user and hasn't used Windows for years (except when trying to help other people who have problems with their Windows machines).

Any fool can write a language: It takes compilers to save the world

Torben Mogensen

Regarding = and ==, I fully agree that assignment and equality should use different operators. But I much prefer the Algol/Pascal choice of using = for equality and := for assignment. = is a well-established symbol for mathematical equality, whereas mathematics generally doesn't use assignment to mutable variables, so there is no a priori best symbol for this. So why use the standard equality symbol for assignment and something else for equality? I suspect to save a few characters, since assignment in C is more common than equality testing. But that is not really a good reason.

Note that I'm fine with using = for definition of constants and such, because this implies mathematical equality (and = is used in maths for defining values of named entities). Context is needed to see whether = is used to declare equality or to test for equality, but the context is usually pretty clear.

Torben Mogensen

Re: I miss a critical note and some figures.

Web Assembly (WASM) is better defined than GCC and LLVM (it even has a formal specification), and it is simpler too, so it makes an easier target language than GCC or LLVM intermediate representations. Unfortunately, WASM is intended for running in browsers, so it is sandboxed and requires an external program to handle i/o and everything else you would normally get from an OS, so it is not suited for all purposes. Also, the existing implementations of WASM are not super fast.

Speaking of browsers, several languages use JavaScript as a target language, allowing them to run in browsers. So much effort has been put into making the travesty called JavaScript run fast in browsers that these languages can have competitive performance. But since JavaScript is very ill-defined, this is not a good solution.

The article also fails to mention JVM and .NET as compiler targets. While farther from silicon than both the GCC and LLVM intermediate languages, they can provide decent performance and (more importantly) access to large libraries (which, unfortunately, in both cases require you to support the object models of Java and C#, respectively).

And while I agree that you need parallelism for compute-intensive applications, not all applications are that, so there is still plenty of room for languages and compilers that do not support parallelism.

The wild world of non-C operating systems

Torben Mogensen

Re: ARM Holdings Instruction Set Bias

The first versions of RISC OS used no C whatsoever; IIRC, a C compiler only came later. ARTHUR, the predecessor to RISC OS, did indeed have parts of its GUI written in BBC BASIC, but when RISC OS replaced it, the GUI was all assembly language.

As for C influence on the ARM ISA, I doubt there was any. Yes, there are autoincrement/decrement loads and stores, but I believe these were inspired more by PDP11 and VAX, and were more general than the C ++/-- operators. Essentially, after you calculate an address (which can include a scaled register offset), you can write that back to the base address register.

Torben Mogensen

Re: RISC OS

RISC OS is an impressive bit of software, but the fact that it is written in 32-bit ARM assembler (which was a somewhat sensible decision back in the 1980s, when it was written) makes it a dead end. It can't even run on the 64-bit-only ARM variants, which are becoming increasingly common.

AFAIR, RISC OS was not Acorn's first go at an OS for their ARM-based machines. There was a project written in (IIRC) Modula-2, but that was far from ready by the time the first Archimedes hardware was finished, so Acorn made ARTHUR (some jokingly said it was short for "ARM OS by Thursday"), which took many elements from the OS of the BBC Micro (written in 6502 assembly language) and added a very limited GUI. After a year or so this was replaced by RISC OS, which was clearly an extension of the ARTHUR code, and after another year (about 1990, IIRC) it was upgraded to RISC OS 2, which had many advanced features that would not appear in Windows until Win95, and some that haven't made it there yet.

At the time I loved it, but in spite of an active fan base, it has no long-term prospects unless it is rewritten in a high-level language that can be compiled for multiple platforms. Rust would be ideal, but it is probably hard to retrofit RISC OS to the Rust memory model. Maybe it is better to write a new OS from scratch that takes the best elements of RISC OS and adds support for newer things such as multicore processing and Unicode.

'We will not rest until the periodic table is exhausted' says Intel CEO on quest to keep Moore's Law alive

Torben Mogensen

Re: "We will not rest until the periodic table is exhausted"

Does Itanium count as a transuranic heavy element?

Torben Mogensen

Re: "two advanced chip factories in Arizona"

That virus is probably called "tax breaks".

Torben Mogensen

Re: "progressing along a trend line to 1 trillion transistors per device by 2030"

The reason clock speed hasn't increased in the last decade is that, while Moore's Law continues to hold, Dennard scaling does not. Dennard scaling is the observation that power use is roughly proportional to chip area, so as transistors shrink you can fit more into the same area without using more power -- or you can reduce the chip area to a quarter and double the clock rate at the same power (power usage being roughly proportional to the square of the clock rate). But Dennard scaling stopped around 2006, so to get a higher clock rate you now need to increase power use (and heat dissipation).

As a consequence, the trend shifted from increasing clock rate (with quadratic power usage) to having more cores per chip (with linear power usage) and shutting down cores (and functional units) not currently in use. This is why your laptop fan goes crazy under heavy load -- all cores are in use.
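A toy comparison using the assumptions above (my own illustration; the wattage is a made-up figure and core scaling is assumed to be ideal):

```python
# Power grows roughly with the square of the clock rate, but only linearly
# with the number of cores running at a fixed clock.
def power_by_clocking_up(base_power_w: float, speedup: float) -> float:
    return base_power_w * speedup ** 2      # quadratic cost

def power_by_adding_cores(base_power_w: float, speedup: float) -> float:
    return base_power_w * speedup           # linear cost, assuming perfect scaling

base = 10.0   # watts for one core at the base clock (assumed)
for s in (2, 4, 8):
    print(f"{s}x throughput: {power_by_clocking_up(base, s):6.0f} W by clocking up, "
          f"{power_by_adding_cores(base, s):4.0f} W by adding cores")
```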

So the problem is not really shrinking -- that gives relatively little. The problem is power use, and that has been addressed somewhat with FinFETs and specialised transistors, but that will only take you so far. Other materials may help, as will superconducting transistors (though these are fiendishly difficult to combine into a complex circuit, as they interfere heavily with each other). Other potential solutions are asynchronous processors (no global clock) and reversible gates (which have a lower theoretical power floor than traditional irreversible gates such as NAND and NOR). But these have yet to be realised at scale.

Java 17 arrives with long-term support: What's new, and is it falling behind Kotlin?

Torben Mogensen

Pattern matching is not a big deal???

I can't see why the author of the article doesn't think pattern matching is a big deal. I use pattern matching in almost everything I code, though this is mostly in functional languages (ML, F#, Haskell), where the support is better than in Java. For example, functional languages support nested patterns and pattern matching over base types such as integers. Oh well, I suppose this will eventually make it into Java; most features of functional languages already have. As Bob Harper once said: "Over time, every language evolves to look more and more like Standard ML".
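To make "nested patterns and matching over base types" concrete, here is a small sketch -- written with Python 3.10+ match/case just to keep one language for these examples; ML, F#, or Haskell would express the same thing more directly:

```python
def describe(point):
    match point:
        case (0, 0):
            return "origin"                      # literal (integer) patterns
        case (0, y):
            return f"on the y-axis at {y}"       # mixes a literal with a binding
        case (x, (a, b)):
            return f"nested: x={x}, inner pair=({a}, {b})"   # nested pattern
        case _:
            return "somewhere else"

print(describe((0, 0)))        # origin
print(describe((0, 7)))        # on the y-axis at 7
print(describe((3, (1, 2))))   # nested: x=3, inner pair=(1, 2)
```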

Glasgow firm fined £150k after half a million nuisance calls, spoofing phone number, using false trading names

Torben Mogensen

Peanuts

£150K for 500K calls is 30p per call. That's not a lot -- I'm sure they paid the caller staff more than that per call. Spam callers should be fined a lot more, otherwise fines are just part of the budgeted expenses.

What's that hurtling down the Bifröst? Node-based network fun with Yggdrasil 0.4

Torben Mogensen

What's with the Ös?

It may seem a bit metal to add random umlauts over Os (I blame Motörhead). But in the Nordic languages the umlaut does, in fact, change the pronunciation, unlike in English, where the similar-looking diaeresis (as in naïve) just indicates vowels being pronounced separately rather than as a diphthong. Bifrost and Ragnarok definitely have O sounds, not Ö sounds.

Realizing this is getting out of hand, Coq mulls new name for programming language

Torben Mogensen

*Bleep*

Given the sounds that cover up four-letter words on TV, how about using the name *Bleep*? I'm sure a suitable backronym can be found. "p" could obviously stand for "prover", and "l" could stand for "logic", but the people behind the language would probably prefer French words. Any suggestions?

Blessed are the cryptographers, labelling them criminal enablers is just foolish

Torben Mogensen

Are banks criminal?

When I use my online banking services, I believe (and seriously hope) that all traffic is strongly encrypted. Does that make the banks criminal? (OK, they may be, but for other reasons.) What about the VPN I need to use to access my work server when not on the local network? What about using https instead of http? And so on.

If governments do not want us to use crypto, they should show an example and stop using it themselves, making all documents and communication public. Like that's ever gonna happen.

Ah, you know what? Keep your crappy space station, we're gonna try to make our own, Russia tells world

Torben Mogensen

Unmanned space station == satellite?

An unmanned orbital space station is just a satellite by another name. "Not permanently manned" could mean anything from short-term maintenance crews every five years to almost always manned, but given that it is stated that the reason for not having permanent manning is radiation, my guess is that it is closer to the first. Higher radiation probably means inside the inner Van Allen belt, which is lower than ISS. This would lower the cost, but require more frequent boosting to maintain orbit.

What's this about a muon experiment potentially upending Standard Model of physics? We speak to one of the scientists involved

Torben Mogensen

Connection to new Dark Matter model?

Researchers at the University of Copenhagen recently released a theoretical study where they replace Dark Energy with adding magnetic-like properties to Dark Matter. It would be interesting (though highly unlikely) if the observed muon magnetic anomaly was related to this.

Where did the water go on Mars? Maybe it's right under our noses: Up to 99% may still be in planet's crust

Torben Mogensen

Not really surprising

As the article states, water on Earth is recycled by volcanic activity and would otherwise not be found in any great quantity on the surface. This has been known for a long time, so it is not really a surprise that the lack of volcanic activity on Mars has contributed to its loss of liquid water.

What is new is that measurements of H₂O vs. D₂O can give a (very rough) estimate of how much was lost underground compared to how much was lost to space.

In any case, for those who dream of terraforming Mars, its low gravity and lack of volcanic activity will make it hard to sustain a viable biosphere without having to replenish it forever. In spite of its current unfriendly environment, I think Venus is a better long-term option for terraforming: Blow away most of the current atmosphere and add water. Redirecting comets from the Kuiper belt to hit Venus will contribute to both. Sure, we are a long way from being able to do that, but in the long run, it will make more sense.

Memo to scientists. Looking for intelligent life? Have you tried checking for worlds with a lot of industrial pollution?

Torben Mogensen

What good would it do us to build and send such a missile if the other side has already launched one (or will do so before our missile arrives)? At best, the satisfaction, as we all die, of knowing we will be avenged, but that is poor comfort.

And, in the event that such a missile misses its mark or is intercepted, we will have made an enemy that might otherwise have been an ally.

Interstellar distances are so large that invasion of another civilized planet is unrealistic. We can destroy one, yes, but invasion assumes there is something worthwhile left to invade. And the amount of war material that it is realistic to bring across interstellar distances will be relatively easily countered by the defender, even if their level of technology is lower -- as long as they have orbital capability. Added to that, invasion is only really worthwhile if the goal is colonization -- sending goods back to the mother planet is too expensive to be worth it -- and sending a large number of colonizers across interstellar space is unrealistic. This is why invasion SciFi postulates hypothetical technologies such as FTL flight.

It might make sense to colonize extrasolar planets that have biospheres but no civilization. You can send frozen fertilised eggs there and let them be raised by robots until they grow up. This will in no way help Earth, but it can ensure long-term survival of the human species.

PayPal says developer productivity jumped 30% during the COVID-19 plague

Torben Mogensen

Meetings

I'm sure the main reason is that developers didn't waste so much time on useless meetings. In Zoom meetings, they can code in another window and only pay attention during the 5% of the time when something useful is said.

Useful quantum computers will be impossible without error correction. Good thing these folks are working on it

Torben Mogensen

"All we have to do is put them together"

That must be the understatement of the decade. Problems in quantum computers arise exactly when you put elements together. Each element may behave predictably on its own, but when you put them together, chaos ensues.

Arm at 30: From Cambridge to the world, one plucky British startup changed everything

Torben Mogensen

Re: Depends on what you mean by "reduced"

"The *real* point of RISC was that it worked round the memory bandwidth problem."

That too, but mostly the load-store architecture prevented a single instruction from generating multiple TLB lookups and multiple page faults. On a VAX, a single instruction could (IIRC) touch up to four unrelated addresses, each of which could require a TLB lookup and cause a page fault. In this respect x86 isn't all bad, as most instructions touch only one address each (though they may both load from and store to that address).

On the original ARM, a load/store multiple registers could cross a page boundary, which actually caused faulty behaviour on early models.

A load-store architecture requires more registers, which is why ARM had 16 registers from the start, something x86 only got in the 64-bit version. In retrospect, letting one register double as the PC (a trick they got from the PDP-11) was probably a mistake, as it made the pipeline visible, which caused complications when the pipeline was lengthened (as it was in the StrongARM).

Torben Mogensen

Re: Depends on what you mean by "reduced"

"As you can imagine, decoding an "instruction" is a lot harder if you don't know how many bytes it contains until you've already begun decoding the first part!"

Even worse, you can't begin decoding the next instruction until you have done a substantial part of the decoding of the current one (to determine its size). Decoding the next N instructions in parallel is easy if they are all the same size, but difficult if they are not: you basically have to assume that every byte boundary can be the start of an instruction, start decoding at all of them, and throw away a lot of work when you later discover which ones were not actual instruction starts. This costs a lot of energy, which is a limiting factor in CPUs, and is becoming more so over time.

You CAN design variable-length instructions without this problem, for example by letting each 32-bit word hold either two 16-bit instructions or a single 32-bit instruction, so you can decode at every 32-bit boundary in parallel. But this is not the case for x86, which has grown in bits and pieces over time and is a complete mess, so you need to do speculative decoding, most of which is discarded.
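A toy illustration of that fixed-boundary scheme (a made-up encoding of my own, not a real ISA):

```python
# Each 32-bit word is either one 32-bit instruction or two 16-bit
# instructions, flagged by its top bit, so every word can be decoded
# without looking at its neighbours -- and hence in parallel.
def decode_word(word: int) -> list[str]:
    if word >> 31:                                    # top bit set: one 32-bit instruction
        return [f"insn32 {word & 0x7FFFFFFF:#010x}"]
    return [f"insn16 {(word >> 16) & 0xFFFF:#06x}",   # top bit clear: two 16-bit halves
            f"insn16 {word & 0xFFFF:#06x}"]

program = [0x80001234, 0x12345678, 0x0000FFFF]
for w in program:
    print(decode_word(w))
```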

Torben Mogensen

You can't use the number of transistors to measure RISC vs. CISC. The majority of transistors in modern CPUs are used for cache, branch prediction, and other things that don't depend on the size or complexity of the instruction set.

Torben Mogensen

Who killed MIPS?

The article states that Arm killed off its RISC rival MIPS. I do not believe this to be true. IMO, it was Intel's Itanium project that killed MIPS: Silicon Graphics, which at the time held the rights to MIPS, stopped developing it to join the Itanium bandwagon, long before any Itanium hardware was available. Hewlett-Packard (which had its own PA-RISC architecture) did the same, as did Compaq, which had recently acquired the Alpha architecture from DEC. So, effectively, Itanium killed three of the four dominant server RISC architectures (the fourth being Sun's SPARC, later acquired by Oracle), and that was based solely on wildly optimistic claims about future performance made by Intel. MIPS continued to exist as an independent company for some years but never regained its position; it was eventually open-sourced and used as the basis of some Chinese mobile-phone processors, but these were, indeed, swamped by Arm. Itanium didn't affect Arm much, except that Intel stopped producing its StrongARM (acquired from DEC) and the successor XScale.

So, while Itanium itself was a colossal failure, it actually helped Intel gain dominance in the server market -- with x86 -- as it had eliminated potential competitors in the server market. Now, it seems Arm is beginning to make inroads on this market.

The evolution of C#: Lead designer describes modernization journey, breaks it down about getting func-y

Torben Mogensen

Functional C# == F#

If you like the .NET platform and C#, but want something more functional, you could try F#. F# is sort of a merge of OCaml and C#, having most of the features of both, but works best if you program mostly in a functional style. You can use all .NET libraries, and there are some F#-specific libraries that better support a functional style.

There are some places where having to support both functional and OO styles makes things a bit ugly (for example, two different syntaxes for exactly the same thing), but overall it is not a bad language. Not as elegant as Standard ML, though.

Torben Mogensen

"You can't take anything away"

While it is traditional to not remove features from a language to ensure full backwards compatibility, there are ways around it:

- You can make a tool that transforms programs using the deleted feature into programs that don't. This can require a bit of manual work, but not too much; fully automatic is of course best.

- You can remove the feature from all future compilers, but keep supporting the last compiler that has the feature (without adding new features to this).

- Warn that the feature will be removed in X years, and then remove it, in the meantime letting compilers warn that it will disappear. People then have the choice between modifying their programs or using old, unsupported compilers once the feature is gone.

- You can introduce a completely new language that is only library-compatible with the old, let the old stay unchanged forever, and suggest people move to use the new language. This is sort of what Apple did with Swift to replace Objective C.

Third event in 3 months, Apple. There better be some Arm-powered Macs this time

Torben Mogensen

Emulation

It used to be that emulation caused a ×10 slowdown or thereabouts, because the emulator had to decode every instruction before executing it. These days, emulation works more like a JVM: you compile code on the fly into a code cache, optimising the compiled code if it is executed a lot (and this optimisation can be done on separate cores). This can keep the slowdown to around 20% on average, and almost nothing on programs with very small, intensive compute kernels. On top of this, calls to the OS run natively, so you are unlikely to feel a significant slowdown. The cost is more memory use (for the code cache) and more heat (as otherwise idle cores are used for compilation and optimisation).
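A much-simplified sketch of that translate-and-cache idea (my own toy model, not how Apple's translator actually works):

```python
HOT_THRESHOLD = 3        # assumed tuning knob

translation_cache = {}   # block address -> "translated" code (here: a closure)
execution_counts = {}

def interpret(block):
    return sum(block)            # stand-in for slow instruction-by-instruction emulation

def translate(block):
    return lambda: sum(block)    # stand-in for emitting native code into the cache

def run_block(addr, block):
    if addr in translation_cache:
        return translation_cache[addr]()            # fast path: reuse translated code
    execution_counts[addr] = execution_counts.get(addr, 0) + 1
    if execution_counts[addr] >= HOT_THRESHOLD:
        translation_cache[addr] = translate(block)  # hot block: translate once, cache it
        return translation_cache[addr]()
    return interpret(block)                         # cold block: just interpret

for _ in range(5):
    run_block(0x1000, [1, 2, 3])
print("cached blocks:", [hex(a) for a in translation_cache])
```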

I can even imagine that Apple has tweaked the x86 code generation in their compilers to avoid code that is difficult to cross-compile to ARM, such as non-aligned memory accesses. This will only have marginal impact on x86 performance (it might actually improve it), but it can have a significant impact on the performance of the generated ARM code.

Amazon blasts past estimates, triples profits to $6.2bn but says COVID will cost it $4bn over the next quarter

Torben Mogensen

COVID?

Only the headline mentions COVID; the article itself just says "employee safety". That can, of course, include COVID measures, but it probably covers all sorts of other measures too.

And COVID is likely to make more people do their Black Friday and Christmas shopping online, so it will probably gain Amazon more than the $4bn that they claim to spend on employee safety, including COVID measures.

As a "Carthago delenda est" line, I will add that I think Amazon should be forced to split into at least two independent companies: One for online sale and one for streaming videos. It would not be difficult to argue that Amazon uses its dominant position in online shopping to do unfair competition towards Netflix and other streaming services, by combining a free shipping membership with their streaming services.

What will you do with your Raspberry Pi 4 this week? RISC it for a biscuit perhaps?

Torben Mogensen

Dead end?

Much as I like RISC OS (I had an Archimedes and an Acorn A5000, and used RISC OS on a RiscPC emulator for a while), I think it has painted itself into a corner from which it cannot escape. It is still mainly written in 32-bit ARM assembly code, and the world is moving to 64 bits -- it is even becoming common for ARM processors to be 64-bit only. It can only use one core, where even tiny systems these days are multicore, and cooperative multitasking is also rather dated. There were good reasons for these design decisions in the late 1980s, but they do not fit modern computing. MacOS had similar issues in the 1980s, but when it moved to a platform based on BSD (from Steve Jobs' NeXT project), most of these problems were solved. Acorn died before it could make similar changes to its own platform, and attempts at moving RISC OS to a more modern kernel have been half-hearted -- there was a RISC OS-style desktop called ROX for Linux, but it mainly copied the look and feel of RISC OS and didn't go very deep, and nothing seems to have happened with it for a long time.

So I can't really see RISC OS moving out of its hobbyist niche and into anything approaching the mainstream -- not without a rewrite so complete that it would be debatable whether the result could still be called RISC OS. It might be better to port some of the interesting parts of RISC OS (some of the apps, the app-as-a-folder idea, and the font manager) to Linux and let the rest die.

Heads up: From 2022, all new top-end Arm Cortex-A CPU cores for phones, slabtops will be 64-bit-only, snub 32-bit

Torben Mogensen

Makes sense

I loved the old 32-bit instruction set when I had my Archimedes and A5000 home computers, but over time it accumulated so much baggage that it became a mess. So I'm fine with a 64-bit-only ARM. Nearly all modern applications are 64-bit only, so support for the 32-bit ISA is just extra silicon area that could be better used for something else.

Sure, it is a drag that future Raspberry Pis will not be able to run RISC OS, as this is (still) mainly 32-bit assembly code. But RISC OS, for all its qualities, will not amount to anything more than a hobby system until it is rewritten in a high-level language (such as Rust) and made more secure. Even as a past user of RISC OS, what I loved was not the OS core but higher-level details such as the GUI, the built-in apps, the font manager, the file system (with file types and applications as folders), and the easy-to-use graphics system. These could well be ported to a more modern OS kernel.

Ah yes, Sony, that major player in the smartphone space, has a new flagship inbound: The Xperia 5 II

Torben Mogensen

Lenses

I seriously doubt the tiny lenses used in smartphones are precise enough for true 8K video. So you could probably do just as well with intelligent upscaling of 4K or lower.

0ops. 1,OOO-plus parking fine refunds ordered after drivers typed 'O' instead of '0'

Torben Mogensen

ABC80

In the 1980s there was a Swedish-made home computer called the ABC80. On this computer, the pixel patterns for O and 0 were EXACTLY the same. Since O and 0 are close together on a keyboard, this could cause hard-to-find errors when programming in BASIC: is this a variable called "O" or the number 0? It didn't help that the designers had the bright idea that, to distinguish integer constants from floating-point constants, you added a "%" at the end of integer constants (similar to how integer variables were suffixed in most BASICs at the time), so O% and 0% were both valid. Variable names could only be a single letter, or a single letter followed by a single digit (and suffixed with % or $ to indicate integer or string variables). All in all, not hugely user-friendly. The follow-up ABC800 added a dot in the centre of the zero, but the BASIC was otherwise the same.

I was the happy owner of a BBC Micro, but I was briefly hired by a company to port some school software to ABC80. The way it operated on strings used huge amounts of memory, so I had to add a small machine-code routine to make in-place updates (insert char, delete char, replace char) in strings to keep it from running out of memory.
