* Posts by thames

1150 publicly visible posts • joined 4 Sep 2014


Torvalds weighs in on 'nasty' Rust vs C for Linux debate

thames

Re: My understanding...

This is pretty much my understanding of it as well. Rust is not simply C with some stuff added, so code has to be written to interface between the two. Every time some C code is changed, the Rust code which uses it may have to change as well, even if the function call interface is the same. The question is who is made responsible for those changes.

There was some other large C project which had some Rust dramatics a while ago, where the Rust enthusiast had promised that his Rust bit of the project wouldn't affect anyone else. He was allowed to introduce Rust on those terms. However, he found the maintenance of his Rust code a major burden and so was trying to push that work off onto everyone else, under the guise of "he just needed them to provide information", when in reality he wanted them to maintain the Rust code so he could go on to write more new Rust code instead of spending all his time chasing a moving target. I think he wrote a small program to automatically re-write the Rust interfaces, but he wanted everyone else to be responsible for writing the necessary inputs to it, which meant they needed to do a lot of extra work despite Rust having been introduced on the condition that it wouldn't create any.

There was massive push-back from everyone else, and the Rust enthusiast ended up bailing out of the entire project in a huff while blaming everyone else. Unfortunately I can't recall the name of the project at this time, although I think it was covered in El Reg.

One of the major problems with trying to use Rust in a C program is apparently that while Rust does offer interface options for working with C, you lose most of the supposed advantages of Rust if you make use of them, including many of the memory safety features. And once you do that, why bother with Rust?

I suspect that what is really needed, rather than a completely new language, is for someone to create a new variant of C, one without a lot of the baggage associated with C++, which just focuses on the major security pain points in C as experienced in the Linux kernel. This would allow for a gradual re-write of the kernel without the dislocation of switching to a completely new language.

After the BitKeeper fiasco Torvalds went away for a while and came back with Git. Perhaps he could do the same with C.

Desktop hypervisors are like buses: None for ages, then four at once

thames

Re: Hyper-V Type1

KVM can be type 1 or type 2, depending on how you use it. Type 1 is basically a hypervisor with no underlying OS, while type 2 runs as an application managed by an OS. Type 1 hypervisors may not need a separate OS to run them, but that's because they have all the features of an OS built in already. What you are using it for and how you are using it determines which type you need.

KVM is a Linux kernel module, so if used in a minimal installation it can be seen as type 1, but if you install it on your regular Debian (or whatever) desktop OS, then it could be seen as type 2.

In reality it's both and neither: it's a hypervisor built by people who didn't care what label the marketing people wanted to stick on it.

As mentioned in another post, I am in the midst of switching from using VirtualBox to using KVM as a desktop hypervisor, and have found the two to be more or less equivalent for my purposes (software testing). Because KVM is part of the Linux kernel (it's a kernel module), I expect that it will be much better maintained than VirtualBox has been under Oracle's care.

thames

Re: Hyper-V Type1

These are "desktop hypervisors", just as it says in the title. This fills a different use case than a server.

They're often used by, for example, software developers who want to test on different operating systems or OS versions without setting up separate hardware. You would fire up the VM, run your tests, and shut it down again. Most of the time you don't have a VM running at all.

Another use case is where you have one Windows program that you need to run occasionally on say Linux. In the old days you would dual-boot. These days you would run it in a VM, as and when you needed it.

In these use cases the minor efficiency differences between hypervisor types are irrelevant in practical terms. Furthermore, modern hypervisors provide special virtualisation drivers which simply pass through I/O requests instead of emulating hardware, so there's very little efficiency lost there.

thames

I'm in the middle of replacing a VirtualBox set up for software testing on a desktop with KVM, and so far everything has been going well. All of the functionality that I used in VirtualBox is present in KVM, and KVM has some features that I am digging into to see if they make life easier than was the case with VirtualBox.

I had been procrastinating on this project for a while, but VB had become too unreliable (e.g. would stop working after an OS update and it would take a long time for Oracle to come out with a fix) so I finally found the motivation to do it. So far it has been going very smoothly and I'm kicking myself for not doing it sooner.

For software testing I have an automated system which starts the VM, uploads the software via SSH, compiles it, runs the tests and benchmarks, downloads the results, checks the results, and then goes on to the next OS target.

For this you need a command line interface to the VM so you can script the whole process. With VirtualBox this is "VBoxManage", with KVM this is "virsh". Everything that I used in VBoxManage has an equivalent in virsh that works more or less the same way. So "VBoxManage startvm <vmname>" and "virsh start <vmname>" do the same thing, etc. Some of the commands may be a bit different (I can't recall exactly), but not to a degree which would make any real difference in usability.
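
To give an idea of what that scripting looks like, here is a stripped down sketch of the general approach (the VM name, guest address, and build commands are just placeholders, not my actual setup):

    import subprocess
    import time

    VM_NAME = "debian12-test"              # placeholder VM name
    SSH_HOST = "testuser@192.168.122.50"   # placeholder guest address

    def run(cmd):
        # Run a command and stop the whole run if it fails.
        subprocess.run(cmd, check=True)

    # Start the VM (with VirtualBox this step was "VBoxManage startvm").
    run(["virsh", "start", VM_NAME])
    time.sleep(60)  # crude wait for the guest to finish booting

    # Upload the source tree, build it, and run the tests over SSH.
    run(["scp", "-r", "./src", SSH_HOST + ":/tmp/build"])
    run(["ssh", SSH_HOST, "cd /tmp/build && make && ./run_tests.sh"])

    # Pull the results back for checking, then shut the guest down cleanly.
    run(["scp", SSH_HOST + ":/tmp/build/results.txt", "./results/"])
    run(["virsh", "shutdown", VM_NAME])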

I don't have experience doing this with VMWare Workstation, so I'm a little surprised to read that they have apparently just introduced "vmcli". I was pretty sure they had something called vmrun which did the same thing, or was that something else, or did it just have limited functionality?

China claims Starlink signals can reveal stealth aircraft – and what that really means

thames

Re: Starlink? What about starlight?

This was an experiment to show that Starlink satellites in particular could potentially be used as sources of ambient radio signals for passive radar. If there are also other signal sources which could also be used, then that would have to be a different experiment.

thames

Re: I'm skeptical

Over-the-horizon systems like Jindalee are high frequency by OTHR standards, but they're still very low frequency by conventional radar standards. Since low frequency conventional radar can detect stealth aircraft just fine, it's theoretically possible that Jindalee can as well, although I haven't read anything which states that specifically.

Present day stealth doesn't work when the radar wavelength is a significant fraction of the size of the entire aircraft, as the whole aircraft becomes the reflector rather than individual parts of it. This is why the supposedly "obsolete" radar systems still operated by countries which couldn't afford new ones can detect stealth aircraft.

However, antennas get bigger as frequencies get lower, and you can't fit an antenna for such a radar into a missile to use it for conventional semi-active or active homing (where the missile uses the radar reflections directly to home in on the target). This is what is meant when people say they "can't be used for targeting".

However, military radars are not all used for targeting anyway. There are warning, search, and targeting radar systems, and they tend to operate on different frequencies and have different roles in the overall process.

The Jindalee system cannot be used for targeting against any aircraft, stealth or non-stealth, but that doesn't mean it hasn't been very useful for Australia to have in order to see whether something is coming their way from the north.

Canada uses OTH radar as well, pointing off the east coast, and has been developing one which can work in Arctic conditions. Until now, interference from the Aurora Borealis has prevented OTH radar from working in the Arctic to detect Russian bombers coming over the north pole, so Canada operates a chain of conventional radar stations in the Arctic for that purpose (the North Warning System), and these are now coming due for replacement. However, Arctic OTH is apparently solvable with enough computer processing power, and so an Arctic OTH radar is close to deployment.

Passive radar based on ambient radio signals has faced a similar problem of requiring massive amounts of signal processing in order to make it work. That sort of signal processing is now practical, according to various reports. Lots of people are doing research into this area, and what these Chinese scientists have done is to show that Starlink satellite signals may be viable as one such source of ambient radio.

thames

Re: Cell phone tower signals

This was a US B2 bomber which was tracked by a British Rapier surface to air missile system as it flew over the Farnborough Air Show. They detected and tracked it using the infra-red sensors built into the launch system. Because the missile was command guided, it wouldn't have needed a radar return signal to home in had it been launched.

In the version of the story that I read, the Rapier vendor (BAe originally, now MBDA) recorded the tracking incident and played it on a continuous loop at their sales booth, until the US made a big fuss with the air show hosts and had the latter make them stop.

Again, what I heard was that they "cheated" a bit because they knew where to look in the first place and so could point the missile launcher (which had the sensor system built in) in the right direction to pick it up.

And this is why a radar system that can't be used to provide terminal homing signals is still useful. If you can tell that something is there and give a rough location, you can then start working on the problem of fine tuning the location using other sensors that have a much narrower field of view.

thames

Re: I'm skeptical

As mentioned by others, stealth mainly works by minimizing reflections back to the source, not by absorbing the signals. The angles are calculated to reflect the radar "pings" off in another direction.

Newer submarines such as the German 212 CD are starting to do the same thing for sonar. They have long used acoustic tiles to try to absorb signals, but this is much less effective than reflecting the sonar pings away in another direction (the new submarines of course will use both methods). These newer submarines use a diamond shaped outer hull instead of a cylindrical one conforming to the shape of the pressure hull.

The idea of using separate transmitters and receivers for radar in order to detect stealth planes is not new. This is called bistatic radar (an old idea which has become new again) and has been known about for years and has been tested with things like television and cell phone tower signals. They look for the "holes" in radio signals rather than looking for reflections.

What these Chinese scientists have done is shown a proof of concept that it can be done using Starlink signals. Other satellite constellations of course could be used as well.

I have to disagree with the author of the Reg story, however, with regards to whether this is useful. If you know that something is there and have a rough location and direction for a target, you can start to bring in assets such as fighters or drones to pinpoint the location and use other shorter ranged detection means for terminal homing. The big problem has been knowing whether there is anything there to find, and this area of research (and other people are working on the problem as well) tries to address that.

It's like the situation with over the horizon radar. It can't be used for targeting either, but that doesn't mean that it's not extremely useful. This is why many billions of dollars are still being spent on it today.

One other advantage of using satellite signals (or other bistatic radar systems using ambient signals) is that they are much harder to knock out because they don't transmit, they just receive. American air warfare doctrine for example places heavy emphasis on suppressing enemy air defences as the first step in any war, and a big part of this is finding and destroying all radar transmitters when they turn on. Using ambient radio transmissions for air warning throws a wrench into the works in this regard. This is the aspect of bistatic ambient radar that really has people interested.

US govt halts medical study into Havana Syndrome, cites 'coercion' of participants

thames

Canada already looked into it - it's due to poisoning from excessive fumigation for Zika

Canada has already looked into it, and the neural damage is due to excessive fumigation carried out during the Zika virus outbreak.

Zika virus is a mosquito borne disease found in tropical areas and the variant which was going around at the time was causing serious birth defects (babies with tiny heads). It was a very hot topic at the time and the US response to this was to increase the frequency of fumigation in their embassy buildings and associated housing. They hired a fumigation company from the US (Florida) to do the work. Canada hired the same company and followed the same fumigation schedule.

The result however was that while this was effective at killing mosquitoes, employees were exposed to excessive levels of insecticide. The insecticide used is a neurotoxin and overexposure to it causes exactly the symptoms found. Testing of Canadian foreign affairs staff found the neurotoxin in question in their bodies. The evidence for this is pretty much as good as it can get.

In Canada it's considered case closed. Health care is free in Canada, so there are no issues in that regard. I don't know if the employees received any compensation for long term health issues affecting quality of life.

In the US however, there seems to be a reluctance to admit what the problem is. I suspect that this may be due to fears of getting sued in what is a notoriously litigious society, and fears of being held to blame and fired over the decision to fumigate that much.

There are also probably a few American diplomats who were not exposed to the insecticide but have convinced themselves that they have the same symptoms, when they are in fact just having a psychosomatic reaction. The US diplomatic service is big enough that there will be a number of people in it who are prone to that sort of thing.

Anaconda puts the squeeze on data scientists now deemed to be terms-of-service violators

thames

Re: This is why we switched to conda-forge

It's called "virtual environments" with Python, and it's built into the standard version. It's one of the things that Python 3 introduced. Anaconda had it longer, but it's standard in regular Python now.

thames

Re: This is why we switched to conda-forge

I think it's mostly inertia keeping most users still using it. Anaconda originated when Windows and Mac OS users didn't have simple installation options like Linux users did with their integrated package management. Conda and the Anaconda repos gave Windows and Mac users a package manager and a set of packages that worked with it.

With pip and PyPI (the official Python package manager and repository) now well established, most of the reasons for most users to use Anaconda have gone away. However, lots of existing users keep on using Anaconda because it's what they're used to rather than because they actually need to.

What Anaconda still offers is a managed service for companies whose IT departments don't want to give their employees free rein to install whatever they want from PyPI or, for Linux users, from their distro's repos. There may also be some packages that Anaconda builds with different options than those available elsewhere.

If companies find that outsourcing to Anaconda is cheaper than doing it in house, or just don't want to give their employees permission to install from PyPI, then they should be prepared to pay for that service. I don't personally have a lot of sympathy for companies that want to outsource IT functions and then complain that the outsourcing service provider has the temerity to want to be paid for their services.

Two Russians sanctioned over cyberattacks on US critical infrastructure

thames

El Reg said: "For example, Mikhail Vasiliev, a 34-year-old former LockBit affiliate dual national of Canada and Russia, was arrested in 2022 after entering Canada on a trip – away from the Kremlin's protection."

Er - Vasiliev lived in Canada, having immigrated from Russia two decades prior to that. He was arrested at his home north of Toronto, and his home had been raided by police several months earlier, when they collected evidence against him. He was stupid enough to keep at it, however, and was caught in the act with the LockBit login screen open on his laptop when they arrested him the second time. Most of the rest of the LockBit gang lived in Ukraine and had already been arrested there. Connections between LockBit and Russia were rather tenuous at best.

Of course CARR and LockBit were complete amateurs at causing disruption compared to Crowdstrike.

Your next datacenter could be in the middle of nowhere

thames

If the best place to put a data centre in Australia is in a remote mining town due to problems with electric power supply, there's something seriously wrong with Australia's electric power development strategy.

Yes, ready availability of hydro-electric power played some part in the location of many US data centres in the states of Oregon and Washington. However, a bigger factor was that those states were the homes of Amazon and Microsoft, as well as being next to California and seeing a lot of spill-over development from the latter. Neither of those states are "remote areas" by any stretch of the imagination.

There are plenty of places in the world which have a good supply of clean energy without being remote mining towns. For example, you could locate your data centre in Vancouver or Montreal and take advantage of the abundant and cheap hydro-electric power there. You could locate it in Toronto and take advantage of the abundant nuclear power there. These considerations also apply to Norway (hydro-electric) in Europe.

If solar and wind are not reliable enough for data centres, then maybe what Australia really needs is nuclear power to provide reliable electricity for all customers.

Raspberry Pi OS airs out some fresh options for the summer

thames

A nit pick

As this is El Reg, I'll use the opportunity to nit pick on something irrelevant to the main point.

El Reg said: "The Raspberry Pi OS 2024-07-04 release appeared on the USA's version of what is arguably the world's most widely celebrated holiday: independence from the UK, as celebrated in 65 countries around the world."

Er, no. If you follow the links back to the original source, it's based on a story about which countries are now independent from the UK, not which countries celebrate a day of independence. There's no "independence day" holiday in Canada, and I don't think there is one in Australia or New Zealand either (I can't speak with respect to other former colonies).

You won't even get agreement in Canada as to when we became independent, just a range of different opinions, depending on how you want to define "independence". The most common event given is the passing of the Statute of Westminster in 1931, which also covered Australia, New Zealand, South Africa, Ireland, and Newfoundland (which was not part of Canada at the time). Some people would say 1982 (patriation of the constitution), and some people would say that there's no specific date, just a gradual process over the first half of the 20th century which took place without anyone actually intending it to.

"Independence day" holidays are for dodgy third world countries.

This was a nice article though, and it prodded me out of my inertia to run upgrades on my Pis (3, 4, and 5).

Breaking the rules is in Big Tech's blood – now it's time to break the habit

thames

Re: I find it strange

This pretty much hits the nail right on the head. AI related copyright issues won't be a legal issue until AI has progressed to the point where it affects copyright holders with deep enough pockets.

I suspect that what may be the deciding factor is when generative AI gets good enough to be used to make commercially successful new video entertainment releases based on a supplied script. If anybody can just feed existing movie and TV libraries into an AI model, then give it a movie script, and then get an acceptable new movie out, existing copyright holders won't stand for it. If it is ruled as being legal, then the laws will be changed to prohibit it.

Software copyright is based on the same laws that protect books and movies. Software licenses are based on granting users permissions to use that software under those laws.

What may be needed is some sort of GPL equivalent which says that anyone using a (let's call it GPL V4) work for training must publish all related training material under the same or compatible license.

Permissive licenses such as MIT probably won't require a change, as they implicitly give all the permissions required for use as training data.

Apache will have to think about their license, as it falls in between MIT and GPL in terms of what it requires (i.e. the patent clause).

How to escape VMware's pricey clutches with Virt-v2v

thames

I'm in the process of moving my test systems from VirtualBox to KVM on Ubuntu. There's no problems on the KVM side so far.

Gates-backed nuclear plant breaks ground without guarantee it'll have fuel

thames

Re: Begins to make sense...

Wind turbines are actually rather poor at load following as well. They suffer from two problems. One is that since they depend on wind speed you can't suddenly ask for more power out of them. They deliver whatever they can and that will vary unpredictably. The other is that because they are usually distributed over a large geographic area they are difficult to coordinate. Large distributed AC systems do not react instantaneously. There is an appreciable lag in response across a large grid which makes control more difficult as the geographic area increases.

If you have a grid which depends heavily on wind power, then you need a large amount of excess capacity operating at all times or you will have an unstable grid. Restarting a wind dominated grid after a black out is also still not a solved problem due to the difficulty of controlling its inherent instabilities.

Wind power on a large scale is probably best suited to places like Quebec which have very, very large hydroelectric plants and reservoirs that can make up for wind's inherent technical deficiencies. Using gas turbines and coal to back up wind, as is done in most places, is going to run into difficulties as gas and coal are phased out.

Solar power is a reasonable match for peak air conditioning loads in hot sunny countries. However, it doesn't provide a base load capability, as output varies by time of day, dropping to zero for much of it. It would probably be best used as a supplement to nuclear in certain limited climatic regions. I don't know if solar suffers from the same sort of distributed control instability problems that wind power does, as there seems to be much less literature published on it.

thames

Re: Why is the dirty word of nuclear energy...

If you had the technological ability to turn spent fuel from thorium into U233 bombs, you would simply build a uranium reactor and extract plutonium from there instead as it's much simpler to do that. U233 has been tested in bombs by several countries, found to be unsatisfactory, and abandoned.

If you are producing U233 from thorium, it will inevitably be contaminated with highly radioactive U232. U232 is difficult to handle, whereas bomb grade plutonium is much easier. It is the U232 contamination in the fuel which supposedly makes thorium unattractive as a source of bomb material because it makes extracted uranium difficult to handle from a practical standpoint (e.g. high precision machining of the bomb segments). This is aside from U233 being a less desirable bomb isotope to begin with.

As mentioned above in another post, CANDU style reactors can use thorium, and they are a very safe and well proven technology operating on a large scale for decades. They keep the lights on where I am right now.

However, practical thorium fuel cycles require at least a few uranium cycle reactors to provide the plutonium to mix in with the thorium fuel. They need about 5% plutonium in the fuel mix. This would be reactor grade plutonium, which is a different isotope mixture than weapons grade plutonium. The presence of too much Pu-240 makes it unsuitable for use in bombs. Military reactors producing Pu-239 for bombs have to cycle their fuel through quickly in order to avoid build-up of Pu-240.

Plutonium which contains 19% or more Pu-240 is considered to be "civil plutonium" and not bomb material. It is not feasible to separate Pu-240 from Pu-239.

UK Magnox reactors were built to produce Pu-239 with electricity as a byproduct. This meant that reactor fuel was cycled through quickly. This is not normally desirable from a commercial perspective, as it results in low fuel burn-up rates. It does however mean that if you want to monitor whether someone is making bombs, you keep track of their fuel usage.

Reactor grade plutonium is currently used today as a component in mixed-oxide reactor fuels in uranium reactors as a substitute for U235. CANDU reactors can and have run entirely on plutonium-uranium MOX fuel. US light water reactors can run on it, but I don't know if they have done so on a commercial scale. French reactors use MOX as about 30% of their fuel load.

Practical Thorium reactors would use thorium-plutonium MOX fuel, or possibly thorium-U233 MOX fuel.

It is not reactor technology which has been holding back thorium fuel cycles. It's simply that the economics favour the cheaper and simpler conventional uranium fuels. Until such time as uranium becomes in short supply and the price becomes much higher, there is no economic incentive to use thorium.

thames

Re: Begins to make sense...

You're confusing dispatchability and load following. Dispatchable means the grid manager can ask for another 500 MW starting at 5:00pm to meet predicted load, and the generating plant can guarantee to deliver it regardless of external conditions.

Load following means how well a generating plant adjusts to actual load on a second by second basis.

Nuclear is about as dispatchable as it is possible to get. Wind and solar in contrast are not dispatchable at all, as their output depends on highly variable factors outside of their control, such as the wind and the weather. They deliver when and if they can, on an opportunistic basis, as a supplement to coal and natural gas.

When used in a grid which has both nuclear and gas turbine generating plants, the nuclear plants are typically used to provide the base load because they have very low fuel costs. The gas turbine plants are used for peaking (turned on at periods of high demand) because they have high fuel costs but low capital costs.

Hydro electric has very good load following characteristics, so it's often used in that role as well as for providing base load. Many countries have built pumped storage systems specifically to provide better load following characteristics.

Very large battery systems will fill the same role as pumped storage without requiring favourable geography for reservoirs and the like. They are also ideally suited for use with nuclear generating plants for daily peaking. The nuclear plants can provide the constant base load, while the battery systems can provide the daily peaks, re-charging from the nuclear plants at periods of low demand.

Battery systems however can't somehow turn a non-dispatchable power source into a dispatchable one. Size and cost factors mean that practical battery systems can cover short term daily peaks, but can't cover for extended periods of unfavourable weather.
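
To put some rough numbers on that (purely illustrative figures of my own, not anything from the article):

    # Storage needed to cover a 1 GW shortfall for a daily evening peak
    # versus a week of unfavourable weather (round, assumed numbers).
    shortfall_gw = 1.0
    daily_peak_hours = 4        # a typical evening peak
    calm_week_hours = 7 * 24    # a week of low wind and solar output

    print("Daily peaking storage:", shortfall_gw * daily_peak_hours, "GWh")  # 4 GWh
    print("Week-long backup:", shortfall_gw * calm_week_hours, "GWh")        # 168 GWh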

thames

Re: Why is the dirty word of nuclear energy...

Canadian CANDU reactors can use thorium. However, uranium is so cheap and abundant that it's not economically worthwhile to use more expensive thorium fuel mixtures.

CANDU reactors currently run on natural uranium - no enrichment needed - so enrichment capacity shortages are irrelevant because they don't use any. They have been running on a large scale around the world for decades with an excellent safety and reliability record, so they are a very well proven technology.

Uranium fuel will self start. If you are using a natural uranium reactor such as CANDU then fabricating fuel elements from purified natural uranium oxide is relatively inexpensive.

Thorium however needs some sort of "seed" material such as plutonium to get the reaction started as it is fertile rather than fissile (you need to transmute it to U233 in the reactor before "burning" it). This means you need to set up fuel reprocessing infrastructure to extract the plutonium from uranium cycle fuel and mix it in with thorium fuel into a mixed oxide fuel. It's theoretically possible to breed more U233 than is burned and use that instead of plutonium, but that's a bit iffy and the breeding rate is slow at best.

An EC6 CANDU or ACR-1000 could be started with thorium fuel containing 5% reactor grade plutonium (a different isotope mixture than is used in bombs). The plutonium fuel bundles can then be gradually replaced (the reactors are refuelled while on-line) with recycled U233 from spent fuel.

Generally though, to operate thorium cycle reactors you need at least a few uranium fuelled ones to produce plutonium to start the reaction. Studies on using uranium enriched to about 20% as a fissile driver show the concept isn't economic.

Practical studies on thorium fuel require a mixed fleet of reactors. Spent fuel from uranium cycle reactors is reprocessed and mixed with thorium, and then "burned" in the thorium cycle reactors.

Indian reactors (CANDU derivatives and later developments) have long used some thorium fuel bundles in some fuel channels as it gives better operating characteristics. Using thorium on a large scale however will require breeding enough plutonium in uranium cycle reactors to make mixed oxide fuel.

Canada, South Korea, and China have all done extensive research and testing on using thorium in CANDU reactors (all three operate them). It is technically feasible and practical. However, uranium is currently so cheap that it isn't economically worthwhile compared to the more complex and expensive thorium-plutonium fuel mixtures.

Why RISC-V must get its messaging right on open standard vs open source

thames

Re: Sanctionable

RISC-V will be like x86 then, a collection of CPUs which are incompatible beyond a common base instruction set. Have a look at the absolute train wreck that SIMD is on x86. There are numerous incompatible extensions, with Intel being the worst offender as they attempt to segment the market in order to extract the maximum revenue from each perceived category of users. It's marketing department driven design rather than technology limitations.

RISC-V will be the same. There will be a core instruction set with vendor specific extensions. Nobody will use the vendor specific extensions except for people who have a specific use case.

Arm does the same using proprietary function units. This doesn't stop most software from running on Arm chips though, as they just stick to the core instruction set.

Mitsubishi Heavy Industries bets big on small turbines for datacenters

thames

Re: Even if the conversion isn't super efficient

The idea is that natural gas will be phased out and replaced by ammonia. There will be no natural gas available to act as a backup.

Natural gas requires significant capital investment in exploration, production, processing, transport, and distribution. Once natural gas consumption falls below a certain level, that whole supply chain goes away and it won't come back. For example, a single LNG export plant typically costs $40 billion. Nobody is going to make those sorts of investments in order for them to sit idle just in case they are needed now and again. The same goes for pipelines.

You also can't have LNG sitting in a tank indefinitely, as it continually boils off. You either need to be using it all the time, or it will be lost anyway (and methane leaks are worse than CO2 emissions in terms of greenhouse gas effects).

You could try storing very large amounts of liquid ammonia, but that is a very dangerous material to keep around in quantity. It's rated as "VERY TOXIC, can cause death" by Canadian authorities. People die from ammonia leaks on a regular basis, and that's just with small scale use such as in refrigeration. A medium size ammonia leak at a food processing plant in Senegal killed over 100 people and injured over 1,000.

If you are going to keep ammonia in tanks at a data centre as a backup source, then the hazardous material aspects of this will be interesting, to say the least. I can't see those data centres being too popular with their neighbours, assuming that they get permission to build at all.

If you are dealing with ammonia on an industrial scale, then the risks can be mitigated by siting the plant in a remote area with fewer people around and a large exclusion area around it. That however is not a great solution for data centres.

The realistic answer for data centres is to locate somewhere which has very reliable utility power. For most countries that implies nuclear power once natural gas and coal plants have shut down.

thames

Germany are planning on importing ammonia from Canada. The ammonia will be made using power from offshore wind turbines off the coast of Newfoundland. Newfoundland have a big offshore oil industry, and they see ammonia from wind as a long term industry to replace oil when the oil fields dry up. The island of Newfoundland is relatively isolated, so there aren't good alternative markets for the wind power electricity otherwise. The German Chancellor was in Canada last year signing deals with his Canadian counterparts on this subject, so it has high level support in Germany.

Ammonia (NH3) can be turned into hydrogen relatively easily, or it can be burned directly in turbines. Mitsubishi have outlined both options in the above article.

Personally I'm not convinced that ammonia is a great idea as a synthetic energy material. A better solution for countries like Germany or Japan would be nuclear power, but there are political roadblocks in that respect. There's not much support for using ammonia for electric power in Canada itself, they're very big on nuclear being the future. They're happy enough to sell ammonia to others though.

MIT professor hoses down predictions AI will put a rocket under the economy

thames

Call centres

I suspect that a major practical application of AI will be replacing call centre workers with AI bots. This would be especially prevalent in industries where the call centres are mainly dealing with consumer complaints about products and returns rather than generating revenue.

America will make at least quarter of advanced chips in 2032, compared to China’s 2%

thames

It looks like the main US targets are Taiwan and South Korea.

Despite all the talk of China, it looks like projected US growth is primarily at the expense of Taiwan and South Korea. I'm not sure those two countries are just going to sit there and take it, or whether they will come out with a response of their own.

Projected US growth is almost entirely in two areas, DRAM and logic < 10nm. With all the subsidies being thrown at the industry, I won't be surprised to see massive overcapacity and bankruptcies in the DRAM business making those projections obsolete.

As for US growth in logic < 10nm, that seems to depend heavily on Intel being successful in making chips for third parties, something they haven't demonstrated much luck with so far. Given the fading relevance of x86, I wouldn't be willing to bet money on Intel still being in the chip making business in 2032, let alone a power there. They may very well decide to go fabless to avoid the high capital expenditure and associated financial risk.

The winners in 10 years time may be the countries that avoided throwing massive subsidies at the industry.

Microsoft really does not want Windows 11 running on ancient PCs

thames

Ah POPCNT! I've run into that problem before with Linux distros

I have an old Mini-ITX PC with a 32 bit VIA CPU which I had resuscitated as a testing platform a few years ago. I had difficulty finding a Linux distro that would run on it. Most major Linux distros had dropped 32 bit altogether, and Debian 32 bit would refuse to install because the CPU lacked the POPCNT instruction. Instead I would get an error message complaining about the lack of that instruction.

Eventually I found a distro that would run without POPCNT, Alpine Linux. I used that for a while on the hardware before eventually retiring it due to what appears to be increasing age related hardware problems.

Most Linux distros abandoned 32 bit altogether some years ago. Many of the ones that still have 32 bit versions won't run on old 32 bit CPUs because they require instructions such as POPCNT. Alpine is specifically built for embedded use, where old designs of 32 bit x86 hardware still find a limited market.
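
If you want to check in advance whether a given machine will hit this, the kernel lists the CPU's instruction set extensions in /proc/cpuinfo, and a few lines of Python (Linux only, obviously) will tell you whether popcnt is among them:

    # Report whether the CPU advertises the popcnt instruction (Linux only).
    def has_popcnt(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "popcnt" in line.split(":", 1)[1].split()
        return False

    print("popcnt supported:", has_popcnt())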

The reason that most Linux distros have dropped 32 bit x86 is that the application software writers have dropped it in their apps, so there's no 32 bit specific security support even if the application still technically compiles for 32 bit. Without security support, the distros don't want to ship these apps, and offering a fully supported OS with all major applications from a single supplier is the raison d'être of most major Linux distros.

Valkey publishes release candidate and attracts new backer

thames

A piece of a bigger pie is better than no pie.

I suspect that the long term result of this will be to make Valkey ubiquitous and used in applications where nobody was using it before. I've experimented with using Redis in an industrial automation application (too involved to explain here), but I've always faced the problem that with just the Redis company backing it, there was no guarantee that it would still be around years from now. I imagine there are other people in the same boat.

Redis the company should accept that a smaller share of a much bigger pie can still mean having a bigger piece of pie than trying to keep the whole tart to yourself.

Novelty flip phone strips out almost every feature possible to be as boring as possible

thames

The camera would be good for taking pictures of things that you need to make a note of later, like signs or public notices, business cards, hand written notes, or something you saw on a store shelf. It's easier to take a photo of it than to remember to carry a pencil and notepad around for it. The camera and screen are more than good enough for that. I have a phone with equivalent specs to this and the camera is very handy. I also use the calculator and calendar (just shows dates) occasionally.

When my nieces first saw it they were astonished that phone technology had advanced to the point where it was possible to make a phone that was so much smaller than the smartphones they had.

Japan may join UK/US/Australia defense-oriented AI and quantum alliance

thames

Re: Circles...

I ain't Spartacus said: "It might work for Canada, they apparently asked to join the UK nuclear sub program back in the late 70s / early 80s - but the costs are huge."

Canada tried to buy 10 to 12 nuclear subs from the UK in the 1980s, but the US vetoed the deal. The US had a veto because of long standing technology licensing agreements between Westinghouse in the US and the UK reactor builder (I can't recall which UK company that was at the time).

The sub deal was signed off at the highest levels by Thatcher, Mulroney, and Reagan. However it required a separate treaty between Canada and the US covering nuclear issues and this is where the US started throwing roadblocks in the way of it, hoping that Canada would get the message and give up. I suspect that the US didn't oppose the deal openly because they didn't want to offend Thatcher, as the latter was keen on the benefits to the UK of this deal.

American objections to the deal mainly revolved around territorial disputes with Canada in the Arctic. The US simply didn't want Canada to have nuclear submarines as they would allow Canada to more effectively assert its sovereignty in its territorial seas in the Arctic where the US navy wanted to be able to treat Canadian waters as if they were American waters.

Years later one of the American officials directly involved in torpedoing the deal wrote about it, and saw it as a big success for the US. Canadian PM Mulroney also gave the Canadian side of the story years after retiring from politics. He said that Canada eventually got the message that Canadian nuclear subs were very unwelcome so far as the US were concerned, so Canada gave up on it. When the project was cancelled, various things such as cost were cited as the reason, as giving the real reason would have provoked a diplomatic crisis within NATO.

Canada still needed replacements for its Oberons though, and needed them right away, so they bought four slightly used Upholder class subs from the UK instead.

Canada is currently in the market for up to 12 new subs to replace the Upholder/Victorias, and there's already stories in the US press about how Canada must not be allowed to have nuclear submarines and so must be kept out of AUKUS. I don't know to what degree this reflects official policy in the US though.

thames

I ain't Spartacus said: "I tried to find a good article covering everything, but there's been so much speculation that I can't."

The deal was put together in a hurry at a high level when the UK PM saw an opportunity and jumped on it. So far as I am aware, a lot of the details are simply not worked out yet. The reactors for Australia are already ordered from Rolls Royce though, so that part is nailed down.

I ain't Spartacus said: "However I did learn that the Virginia Payload Module isn't a whole block, it's a module that will have to be integrated, so that was wrong in my above post. No whole blocks (rings) from the US."

Vertical launch systems for missiles of that size are normally modular, including on surface ships. Australia would simply build the hull section with a large rectangular empty space for the launch system and then weld it in when it's shipped from the US. It would be very difficult for the US to build the hull section itself, as they wouldn't have the tooling for building it. Modern naval ships are built in sections with piping and wiring and everything else already in place, and then welded together. Everything has to line up very precisely.

I ain't Spartacus said: "There is more flexibility in the SSN-AUKUS program, and so there may be a way to deal with the workshare issues that keeps the Aussies happy in a way that the French couldn't."

With the Attack class, Australia were trying to heavily customize an existing French design. This inevitably resulted in rapidly escalating costs and receding completion dates. When the project reached the next approval gate, the Australian cabinet got cold feet and pulled the plug.

After that happened Australia started revisiting the question of nuclear versus non-nuclear from the start and this is where the UK jumped in. The UK Astute replacement was just starting to go on the drawing board and the UK were keen enough on export sales so Australian requirements could be designed in from the start rather than added on later.

I ain't Spartacus said: "Also the MoD have announced that the SSN-AUKUS boat is going to have a common combat system - and the Australians want US systems as it's the US they inter-operate with and that's what they have on their current fleet."

One of the Australian requirements for the French designed subs was for them to be fitted with an updated version of the CMS (Combat Management System) designed by Australia for the Collins class subs. This uses a lot of American hardware, but the software and integration was Australian. Originally the complete CMS for the Collins class was to be provided by an American company, but the latter botched the job so badly that they eventually had to be kicked out of the Collins project (Canada had similar problems with fitting American combat systems to the Upholder/Victoria subs it bought from the UK, although that was eventually made to work).

Australia then designed its own CMS using Australian software and integration engineering, but using a lot of American hardware. As Australia had sunk so much money into their own submarine CMS and had continued to pump money into the organization responsible for it on further development, it was a firm requirement that any new subs had to use it as well. The Australians considered this CMS to be one of their defence industry crown jewels.

It's not clear from current press reports whether the common CMS for the AUKUS subs is the Australian designed one which uses a lot of American hardware, or is in fact just the complete American system. If it's the former, then a lot of the Australian content requirements will be met by the UK using the Australian CMS in their own boats. If it's using the complete American system, then this is a huge reverse course for Australia.

However, none of the press reports that I've read seem to understand the history of the CMS issue, and so they don't describe the intention clearly enough for us to see what is actually going on.

The issue of nuclear propulsion always ran into the stumbling block of high costs and whether it would suck up so much of the defence budget that not enough money would be left for the rest of the Australian defence forces. The advantage of nuclear for Australia was always in terms of reduced transit time between the main naval bases and the primary patrol areas on the opposite side of the continent. Once on patrol AIP subs are quieter and so more "stealthy" and so are better in that regards. Reduced transit time though would let Australia buy fewer submarines.

thames

The Australian boats will be built in Australia, with the reactor coming from the UK, possibly as a complete reactor compartment hull section.

The US will sell or lease Australia 3 second hand Virginia class submarines which are near their end of life as interim subs while waiting for the new AUKUS subs. This will help Australia get a head start on training and gaining experience so they will be ready for the AUKUS submarines when they do arrive. When the AUKUS subs arrive, the ex-Virginias will be returned to the US and scrapped.

The AUKUS subs are just the design the UK were working on as Astute replacements, with Australia giving some input as to their requirements. The Astute replacements were going to use the Dreadnoughts as a starting point to save on engineering costs and for parts commonality.

The number one reason the Attack class subs being designed for Australia by France (Naval Group) were cancelled was that too much of the submarine was to be built in France and not enough in Australia, and the Australian defence industry was turning up the heat on the government over this. The second issue was rapidly escalating costs as the original French design was "Australianized" at the request of Australia.

Industrial robots make people feel worse about jobs and themselves

thames

The study would probably be better entitled "how to ask a group of people to fill out surveys about opinions on vague and waffly concepts and then torture the resulting data until some very tiny result emerges".

Malicious xz backdoor reveals fragility of open source

thames

Re: Scary

StrangerHereMyself said: "For security critical projects and its dependencies maybe government intelligence agencies should be queried to establish the identity of someone requesting access. "

Er, some government intelligence agency is at the head of the list of suspects in this case. I'm not sure what asking them if one of their employees or contractors should be trusted is supposed to accomplish.

thames

Re: Would This Have Been Caught Sooner In Proprietary Software?

Microsoft, Google, Apple, etc. all make extensive use of outsourcing for their products. Those outsourcers in turn outsource to yet more subcontractors. All you have to do to get a backdoor into one of those is to either pay off the managers or just buy the subcontractor outright.

The US have a history of backdooring proprietary encryption systems by simply paying off the right people or becoming investors via a shell company. It would be naive to presume that nobody else has ever thought of doing the same.

thames

Re: Almost certainly fake names

A bit of googling turns up multiple people named Jia Tan in Canada, Singapore, and the UK, including two different professors at the same university (Cambridge).

Given the effort put into this project though, it's very unlikely that "Jia Tan" is his real name. If, as suspected, this is a professionally done job, then there were possibly multiple people involved in writing it and getting it accepted, and several different people could have been "Jia Tan" (and Jigar Kumar, and Dennis Ens, and Hans Jansen) at various times. Whoever was behind it isn't going to risk having a multi-year project go down the drain just because the original person pretending to be "Jia Tan" changed jobs.

Also, the person who wrote the malicious code likely has a professional background in writing malware, and his real name in that field may be known, or become known, putting the xz backdoor at risk if someone recognized it. It's much safer just to use a fake name that is difficult to trace.

As for where the name came from, a possible way of getting fake names is to just copy lists of staff names from a variety of major universities in the UK, US, Canada, etc., and pick some names at random. Then google each of those names to see if other hits come up so you know that you didn't pick a unique name.

thames

Almost certainly fake names

According to the linked blog written by Evan Boehs, the following names are all associated with this backdoor:

  • Jia Tan
  • Jigar Kumar
  • Dennis Ens
  • Hans Jansen

Almost certainly all are simply made up names which give us no clue as to the origin of this (as noted in the story).

As for whether this is a problem unique to open source, the same thing can happen with proprietary software by the simple expedient of buying proprietary libraries off the original vendor and then the new owner adding the necessary code. There's little chance of being discovered either, because the source code is not open to inspection by third parties.

I believe the US has also backdoored proprietary encryption systems several times by simply paying the vendor to do so or by becoming a major investor in the company and then putting their people in charge (this happened at least once, if not twice with companies headquartered in Switzerland).

Given the extent of outsourcing used in the software industry and the world wide nature of software development, what we have seen in the case outlined in the story is probably the best that can be hoped for.

What we need is awareness that problems like this can happen, so that when fishy looking things do happen, suspicions can be raised. Long term involvement in a project isn't a guarantee of trustworthiness either, as you never really know how someone will react if someone else waves enough money in his face.

PostgreSQL pioneer's latest brainchild promises time travel to dodge ransomware

thames

Re: Much as I'm a fan of PostgreSQL..

The bigger issue is that the "industrial and manufacturing companies" mentioned as being heavily affected by this problem are particularly concerned about their SCADA and other similar systems being taken down by ransomware.

These pretty much all use software from major automation vendors which run on MS Windows systems only and so are vulnerable to bog standard Windows viruses, including ransomware associated ones. There is usually a database involved, nearly always MS SQL Server. Porting them to another OS means a complete ground-up replacement of the whole application stack, which is a non-trivial undertaking.

Most of these installations, particularly those in factories, are not really suited to moving them to any sort of off-site cloud or data centre, as they rely on frequent low-latency polling of associated industrial hardware. Stuff involving utilities such as water or pipelines might be different due to things just inherently working a lot more slowly in those applications, but high speed assembly lines are far and away the most common industrial application.

A more promising approach to this particular problem for the mentioned "industrial and manufacturing companies" might be to move the existing applications and their associated Windows OS to VMs hosted on another OS. All the application related files and databases could be somehow written through to the underlying host OS instead of being hosted on Windows. The transaction oriented file storage problem would then be isolated to just those application files. Restoring would involve restarting the VM from a clean image and rolling the file transactions back to a known good point. I'll leave recovering the data from after the roll-back point as an exercise for the reader.
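
As a rough sketch of the roll-back part only (assuming a KVM/libvirt host, with a placeholder VM name; the baseline snapshot is taken once while the guest is shut down):

    import subprocess

    VM = "scada-windows"   # placeholder name for the Windows guest

    def virsh(*args):
        subprocess.run(["virsh", *args], check=True)

    # One-time step: snapshot the known-good image while the guest is off.
    virsh("snapshot-create-as", VM, "clean-baseline")

    # After an infection: force the guest off, revert to the clean image,
    # and boot it again. Application data written through to the host would
    # then be rolled back separately to a known good transaction point.
    virsh("destroy", VM)
    virsh("snapshot-revert", VM, "clean-baseline")
    virsh("start", VM)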

Of course this simply moves the goal-posts from attacks using bog-standard Windows viruses to attacks targeted specifically at the OS hosting the Windows VM. However, it would seem to provide the same sort of solution as DBOS would in this specific application, without replacing the entire application stack and without moving anything to "the cloud".

thames

Re: Why the cloud?

It's a minimal Linux OS optimized for running on cloud services. Stonebraker is simply suggesting that the way it works may have some inherent advantages when dealing with ransomware. It wasn't created specifically to deal with ransomware.

thames

A bit of googling turns up that DBOS is a minimal Linux OS using FoundationDB as its file system. It's designed to run in VMs on cloud hosting services, and so doesn't include all the drivers for dealing with hardware directly. Those cloud hosting services of course all run on Linux.

There are of course a number of other "cloud native" operating systems which are minimal Linux systems designed to run only in VMs. DBOS's defining characteristic is that it uses its FoundationDB distributed database for a lot of functions that would otherwise use a regular file system.

China encouraged armed offensive against Myanmar government to protest proliferation of online scams

thames

Re: Old news?

I seem to recall ICG being mentioned one time previously on El Reg, with respect to them complaining about China's efforts to crack down on international organized crime gang involvement in Myanmar and Cambodia. It was the same problem as in this article (foreign workers being held in slave-like conditions), just addressed from the angle of the Chinese government going after the parts of the criminal gangs who had assets in China.

The Chinese police were issuing orders to Chinese citizens operating in Myanmar and Cambodia and who had apparent links to dubious operations there to return home and report on the source of their wealth. If they failed to do so, their assets in China would be seized and family members would have their government benefits cut off. ICG were wailing about how awful and unfair this was while obfuscating the background of what it was they were complaining about.

While this current story sounds like an interesting one, I'm not personally inclined to take ICG's portrayal of things as giving a full and complete picture of what is actually going on.

Venturing beyond the default OS on Raspberry Pi 5

thames

Re: Raspberry Pi Imager with Other OS

You're using a lot of words to simply object to me calling the Raspberry Pi OS GUI "cut down". It's based on a modified LXDE, the latter of which is specifically designed for low RAM usage. This was selected in order to run adequately on a 1GB Pi.

I made no mention of Xfce, Budgie, MATE, or KDE, because I haven't tried any of those on a Raspberry Pi. I can't speak about Xfce as I'm not as familiar with it, but Budgie, MATE, and KDE are not designed for low RAM usage, so I don't know why you would think I was referring to them even by implication. They are all relatively similar to Gnome in terms of features and size.

If you like any of those better than Gnome, then that's fine. The point which was being made is that if anyone is using an 8GB Pi (I haven't tried a 4GB model), then there is no technical reason why an OS which uses Gnome can't be on the list of options. The Pi seems to have the horsepower to run it just fine.

Someone else will have to speak with respect to Xfce, Budgie, MATE, or KDE, because I haven't tried any of them on my Raspberry Pi hardware. Someone else again will have to speak with respect to the 4GB models, as I don't have one and so haven't tested them.

My previous comments have been based on my experience running Ubuntu on a Pi4. I recently acquired a Pi5. After my previous post on this subject I was inspired to install Ubuntu 23.10 on a spare SD card (using the Raspberry Pi imager) and try it in the Pi5. It ran fine, with good graphics performance running the standard Ubuntu version of Gnome.

I also tried playing a Youtube video on full screen (1920 x 1080) and it ran just fine, with no glitches or visible dropped frames.

To reiterate my point, whatever Linux OS you decide to run, the Raspberry Pi 5 with 8GB of RAM will likely have more than enough CPU and GPU performance to run nearly any mainstream Linux distro which has a Raspberry Pi ARM version, and it will be indistinguishable from running it on an x86 PC.

thames

Raspberry Pi Imager with Other OS

With regard to using the Raspberry Pi imager with other operating systems, I've used it with Ubuntu on a Pi4 for a number of years with no problems at all. You just pick Ubuntu as the OS you want to install and it will download and install it on the card for you. About a dozen different versions of Ubuntu are available for the Pi, and Ubuntu was among the earliest alternative OSes available for it.
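If you download an image manually instead of letting the imager fetch it for you, here is a minimal sketch for checking it against the published SHA256 sum before writing it to the card. The filename and hash below are placeholders, not real values.

    #!/usr/bin/env python3
    """Sketch: verify a downloaded Pi image against its published SHA256 sum."""
    import hashlib

    IMAGE_FILE = "ubuntu-preinstalled-server-arm64+raspi.img.xz"  # placeholder filename
    EXPECTED_SHA256 = "0" * 64                                    # placeholder hash

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in 1 MB chunks so large images don't need to fit in RAM
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        digest = sha256_of(IMAGE_FILE)
        print("OK" if digest == EXPECTED_SHA256 else "MISMATCH: " + digest)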

The imager also offers Apertis (some sort of Debian-based embedded OS) and RISC OS Pi. These are in addition to the various versions of Raspberry Pi OS and application-specific OSes (games, media, etc.).

I've used Ubuntu for some years as a server OS for testing ARM software that I have written. It's always worked just fine. I originally picked Ubuntu because there wasn't an official 64 bit version of Raspberry Pi OS at that time, but there was one for Ubuntu.

I also have a spare SD card set up with a desktop version of Ubuntu that I used for a backup in case my PC died. During the pandemic I relied on it for a few days while waiting for a spare part for my PC, and found that a Pi4 was adequate for normal desktop use provided I wasn't trying to watch full screen video over Youtube. The latter issue appears to have been due to GPU limitations on the Pi4. Ubuntu desktop (Gnome) apps though worked just fine, and I didn't notice any difference from using an x86 PC aside from that the Pi booted up much faster than any PC that I've seen.

If your Pi has 8GB of RAM (I haven't tested with 4GB), I don't see any technical reason to use a cut down GUI as opposed to standard (Gnome based) Ubuntu GUI. The Raspberry Pi OS doesn't seem any faster than standard Ubuntu in terms of GUI performance. Of course if you want to play around with other things then there's nothing wrong with that.

Year of Linux on the desktop creeps closer as market share rises a little

thames

We've been through this before

I can recall when WordPerfect and Lotus 123 were the corporate standards on every business desktop and most office workers where I am had never heard of, let alone seen, Microsoft Word or Excel. We used email and calendaring software from somebody else whom I can't recall at this point, and project management software from some other company I also can't recall. Accounting had a set of custom Lotus 123 macros which were deeply woven into their accounting process.

However, the large manufacturing company that I worked for at the time got sold from one multinational to another multinational, and the new owners had a blanket license deal with Microsoft covering a wide range of products, including everything on the corporate "standard desktop". About a year after the company changed hands the edict came down from above that we were all to change to the new standard for cost saving reasons. The new bundle from Microsoft was slightly cheaper to license than the old bundle from a collection of other vendors.

The general opinion of the users was that all the new stuff was worse than the old stuff and nobody knew how to use the new stuff. The new stuff from Microsoft was slow, buggy, hard to use, and didn't have all the functionality of the old stuff.

However, none of that mattered. The new stuff was deemed to be cheaper as a bundle, so we were changing whether the users liked it or not (not that anybody actually asked us of course). This shouldn't be too surprising, as it's no different from how any other business decision was made.

Training was not an issue. If you needed training you were given the phone number of a company which did training, and you could go get yourself trained on your own time (the company would pay for it though). If you couldn't get used to the new stuff, well, you could be easily replaced.

Your work quota of course would not be reduced by one iota during the transition. Just work longer (for no extra pay) if you were having any trouble with the transition.

Accounting depended heavily on Lotus 123 macros. They got told "sucks to be you" and were given the phone number of a local consultant whom they could hire to rewrite their macros to work with MS Excel.

I don't know how hard it was for the accounting people, but everybody else in engineering, sales, logistics, etc. figured things out for themselves and within a few weeks any remaining problems had faded into the background noise. The main problem remaining was that Outlook was buggy as hell and email and calendaring were not as smooth and as well integrated as the old solution (and still hadn't caught up years later when I left that company). We just had to live with it though.

So, we've been through this before, and the usual reasons cited by people for why we can't switch from Windows to Linux were not seen as any sort of barrier when we switched from non-Microsoft products to Microsoft products once the latter were seen as a cheaper alternative to the then industry standards.

The real reason that Microsoft is dominant on the corporate desktop is that they have the global business connections to make these sorts of deals with businesses and developers. None of the large "Linux" companies are really interested in tying up their capital to duplicate those business connections in what they see as a stagnant "legacy" market when the real market growth is elsewhere.

Windows is the IBM mainframe of the desktop. It's not going away any time soon, regardless of how much better the alternatives are (and Linux, at least in the form of Ubuntu, is definitely better on the desktop than Windows from a user perspective).

New York Times sues OpenAI, Microsoft over 'millions of articles' used to train ChatGPT

thames

Re: It's all about profit

You don't have to register your copyrights in the US in order to sue for infringement. Registration just affects the sort of damages you can claim.

If your copyright is registered you can claim statutory damages (an automatic amount) without having to prove actual damages (how much it really cost you). If you want to claim more damages than the statutory amount you can, but you have to offer proof of the value of the loss.

If your copyright is unregistered then you cannot claim based on statutory damages and have to prove actual damages, which means showing proof that you actually lost money due to the infringement.

What registration of the copyright does is basically make it easier for large companies to sue small infringers because they don't have to prove that the infringement actually cost them any money.

China's Loongson debuts processor that 'matches Intel silicon circa 2020'

thames

Re: Forget performance, what about availability and documentation?

According to the press release it runs several different Linux distros, including Kirin, Euler, Dragon Lizard, and Hongmeng. The first one may be an Ubuntu derivative for the desktop, but I'm not entirely sure as there are multiple projects with similar names. The next two are CentOS derivatives from Huawei and Alibaba. The fourth is an open source Android derivative from Huawei. They're all existing current Linux distros that have simply been ported to Loongson.

There's also a big range of development tools ported to the architecture, including GCC, LLVM, Go, Rust, Dot Net, etc., as well as audio and video accelerator codecs. They said they are working with and contributing code to nearly 200 international open source communities, so they're not working in isolation on this.

thames

Re: Not big

Loongson hasn't been big in China because Intel and AMD chips had been available for import at competitive prices.

However, now that the Americans are embargoing sales to China, Loongson has effectively been granted a protected market to grow and develop in. The Chinese market is huge, so even if they don't see a lot of export sales (outside of embedded applications), they can still sell a lot.

If the Chinese government had been the ones to make the decision to exclude Intel and AMD from this market segment in order to promote sales of Loongson CPUs, the Americans would have been the ones complaining about protectionism.

thames

Re: Fake benchmarks though

Neither the author of the story nor the presenter at the conference made any claims about their 2.5 GHz chip matching an Intel 6 GHz chip in terms of performance. The story clearly states that it was being compared to "a comparable product from Intel's 10th-generation Core family, circa 2020". Anyone with a technical background would know that they would be comparing chips of a similar clock rate. Suggesting otherwise is being rather disingenuous.

The conference statements (linked in the story) show that the chip at introduction was being targeted at the broader desktop market, rather than the top end niche gaming computers which Intel said their fastest chips were aimed at.

The announcement title also suggested that they are aiming the first chip in this series at the mid-range PC market. They are clearly looking at mass market PC sales. They have server chips under development which will be announced later. If you want to see how their fastest server chips do, you'll have to wait for those to come out.

You have provided no evidence that any of the benchmark results were "fake" as you claim. You have just done a lot of hand waving to distract from the fact that the developers appear to have been able to design a chip which is competitive in technical terms in the market for which it is oriented.

Commercial success is different from technical success, so we'll have to wait and see how well this chip sells in the Chinese market and abroad, particularly outside of government sales.

I suspect that we will be seeing similar announcements coming out of India in about 10 years time or so, except based on RISC-V. They have similar ambitions as China with respect to IT technology independence, and for similar reasons.

Will anybody save Linux on Itanium? Absolutely not

thames

Many years ago I worked for a company that used DSP accelerator boards from a major vendor (number one in their field) in PC based test equipment that we built. The DSP chip was the DSP32C from AT&T. I wrote the software, and benchmarks showed that we could only meet the required performance in terms of how long a test took by offloading the heavy data array calculations from the PC running the overall system to the DSP board.

A few years later I was at a seminar run by our hardware vendor about their new products. They told us they were dropping the DSP board product line as it was no longer really necessary. The latest mainstream x86 CPUs had gotten faster, and more importantly they now had SIMD instructions. It was the SIMD instructions in the DSP which had made it so fast. I later did some benchmarks on newer hardware and found this was so.

One big advantage of integrating SIMD into the general purpose CPU, by the way, is that you no longer lose so much time shuffling arrays of data back and forth over the bus as you perform different sorts of operations on them.
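To illustrate the kind of array arithmetic involved (a stand-in, not the original DSP32C code), here is a minimal sketch comparing an element-by-element loop with the same multiply-accumulate done as a single vectorised operation; NumPy's array routines use the CPU's SIMD units where they are available, so the timing gap gives a rough feel for the difference.

    #!/usr/bin/env python3
    """Sketch: scalar loop vs vectorised multiply-accumulate over a large array."""
    import time
    import numpy as np

    N = 1_000_000
    a = np.random.rand(N)
    b = np.random.rand(N)

    # One element at a time, the way a plain scalar loop would do it
    t0 = time.perf_counter()
    acc = 0.0
    for i in range(N):
        acc += a[i] * b[i]
    t1 = time.perf_counter()

    # The whole array in a single call, which can use SIMD internally
    t2 = time.perf_counter()
    acc_vec = float(np.dot(a, b))
    t3 = time.perf_counter()

    print(f"loop:       {t1 - t0:.3f} s  result {acc:.3f}")
    print(f"vectorised: {t3 - t2:.3f} s  result {acc_vec:.3f}")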

There was still a market for specialized DSP chips, but it was increasingly in certain specialized embedded applications where close integration with application oriented hardware features was important.

Royal Navy flies first mega Mojave drone from aircraft carrier

thames

Re: Probably the future of carrier operations

The immediate use case driving this current project is to find a replacement for Crowsnest (radar mounted on a Merlin helicopter) by 2030, which is when Crowsnest is scheduled for retirement.

The current plan is to have drones take over a lot of the routine monitoring and surveillance jobs that manned planes would have to be otherwise used for, freeing up the manned planes for more complex jobs. The latter includes both F-35s and Merlin helicopters. Drones have lower operating costs than high performance manned aircraft, and it's the operating costs, not the purchase price, which dominate the overall lifetime costs.

The issues being looked at include factors such as how well the model being evaluated will take off and land in all sorts of weather and sea conditions, and how to deal with safety issues such as making sure the drone doesn't crash into parked aircraft on the deck in the event of a bad landing. Since there's no pilot, I imagine the options for ensuring the latter are probably a lot more "robust" than would be the case if there were a pilot in the aircraft.

GNOME developer proposes removing the X11 session

thames

Gnome need a reality check when it comes to Wayland's problems

I wouldn't care whether I was using Wayland or X if it weren't for the fact that some really basic essential features in Wayland were still not working, and that there were Wayland-related bugs in things as simple as text editors (which would appear to be Gnome Wayland bugs).

I used Wayland by default with Ubuntu 22.04 until I got tired of having to log out and log back in with X whenever I needed to use an application which wasn't supported under Wayland. Now I just use X all the time and don't miss anything that Wayland supposedly offers.
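For anyone unsure which type of session they're currently logged into, a minimal sketch (assuming a desktop that sets the usual environment variables):

    #!/usr/bin/env python3
    """Sketch: report whether the current desktop session is Wayland or X11."""
    import os

    session = os.environ.get("XDG_SESSION_TYPE")
    if not session:
        # Fall back to the display-specific variables if the session type is unset
        if os.environ.get("WAYLAND_DISPLAY"):
            session = "wayland"
        elif os.environ.get("DISPLAY"):
            session = "x11"
        else:
            session = "unknown"
    print("Session type:", session)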

I don't care about remote desktops. I don't care about multi-monitor support. I do care though about basic features which Wayland developers have acknowledged for many years as being something they needed to address but never seemed to get around to for no discernible reason.

I suspect that what will happen is that Gnome will follow their usual practice and just make Wayland non-negotiable whether it's ready or not. Then Fedora will adopt the new Gnome while nearly everyone else sticks with an older version of Gnome. Most application developers will go with the majority market share and do whatever most distros do, which will be to stick with an older version of Gnome.

Red Hat was able to ram Systemd and Pulse Audio down everyone's throats because they didn't affect most desktop users directly. For Wayland though, the deficiencies are serious enough to be absolute show stoppers for many people.
