* Posts by thames

1228 publicly visible posts • joined 4 Sep 2014


Automatic UK-to-US English converter produced amazing mistakes by the vanload

thames Silver badge

Re: Had to think abute that ;)

Canadians do not pronounce "about" as "aboot". "Ou" is pronounced in the middle to front of the mouth, while "oo" is pronounced further back in the mouth.

However, the way Americans pronounce words leads their brains to interpret sounds in the way they expect to hear them, based on the surrounding consonants and vowels. Thus they imagine they hear "oo" when objective analysis of the sound waves shows otherwise. I assume the same phenomenon applies in Australia.

There are variations in accent in different parts of Canada, including differences in rural versus urban accents, but I doubt that the person you heard was saying "oo".

thames Silver badge

Re: Whoops

There are also Canadian Multilingual keyboards, which are meant for use with both English and French. They use a basic QWERTY layout with a few extra keys for accented characters, and the right Alt or Ctrl key gives access to a whole bunch of extra characters unrelated to French, such as common fractions, ligatures such as æ (ae written as one character), various currency symbols, etc.

They aren't commonly used though because American layout keyboards are a bit cheaper.

thames Silver badge

Re: Whoops

In terms of things used to erase pencil marks, "rubber" and "eraser" are used interchangeably in Canada. There is however a common joke where you pretend to misinterpret what someone meant when they said "rubber" in order to try to get an embarrassed reaction from them.

Metal maker meltdown: Nucor stops production after cyber-intrusion

thames Silver badge

I don't know about this particular case, but pretty often it's something like the MRP/ERP system that has the problem. That has to be connected to the outside world in order to do anything useful, such as take orders from customers or order material from suppliers.

The actual manufacturing equipment may be just fine, but if the MRP/ERP system is not working then manufacturing won't know what they should be making. The company won't want to use up material making stuff they can't sell, not just because they don't want to carry inventory, but also because they need that material to make the stuff they are committed to supplying to customers.

So: customer 'A', located hundreds of km away, places an order. The order goes into the MRP/ERP system, where it gets batched up with orders from other customers into economic and technical batch sizes. Material to fulfill that order gets ordered from suppliers, shippers get contracted to deliver it, the material gets received and receipts issued, and it goes into a receiving / storage yard or warehouse. Materials handlers get work orders telling them what material gets moved where and when, production gets an order telling them what they should make and when, batch recipes matching the order and the material get downloaded from storage into the equipment, and QA reports based on production data get generated and stored. Then all of that gets repeated in the reverse direction so the finished order can get out to the customer. And of course all of this data must be available to the bean counters so they can decide whether or not the company is making any money on all this.

All of that is part of the manufacturing operation, and if the IT systems which do all that are shut down, then everything grinds to a halt. There may be no manual backup for this because the people who knew how to do it all manually left years ago, and there aren't enough people there now to do it manually anyway.
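To make the dependency concrete, here's a minimal sketch in Python (purely illustrative, not any real MRP/ERP interface) of the order-batching step and what happens when the order system is offline:

```python
# Purely illustrative - not any real MRP/ERP API. The point is that the
# shop floor only ever sees work orders derived from customer orders.
from dataclasses import dataclass

@dataclass
class CustomerOrder:
    customer: str
    product: str
    quantity: int

class ErpDown(Exception):
    """Raised when the order system is unavailable (e.g. after an intrusion)."""

def plan_production(orders, batch_size, erp_online=True):
    """Batch customer orders into production quantities for the shop floor."""
    if not erp_online:
        # No orders in, no work orders out: the plant idles even though
        # the manufacturing equipment itself is perfectly fine.
        raise ErpDown("no visibility of demand or material")
    demand = {}
    for order in orders:
        demand[order.product] = demand.get(order.product, 0) + order.quantity
    # Round each product's demand up to a whole number of economic batches.
    return {product: -(-qty // batch_size) * batch_size
            for product, qty in demand.items()}

orders = [CustomerOrder("A", "rebar", 700), CustomerOrder("B", "rebar", 400)]
print(plan_production(orders, batch_size=500))  # {'rebar': 1500}
try:
    plan_production(orders, batch_size=500, erp_online=False)
except ErpDown as reason:
    print("production halted:", reason)
```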

When you order that thing you wanted off the Internet, eventually that order has to get to the machine which actually makes it. If you can't order the thing, it won't get made.

Realistically, the solution is to have good backup and recovery processes and to actually test them regularly to see if they work.

CISA mutes own website, shifts routine cyber alerts to Musk’s X, RSS, email

thames Silver badge

Re: Big up RSS

I'm using Linux, so I use Liferea, which seems to be the main RSS reader for Linux. I don't have any suggestions for Windows.

thames Silver badge

Re: Big up RSS

RSS is definitely the way forward for information such as this. It shouldn't be a big problem to simply automatically echo the RSS feed to Twitter and other social media networks if for some reason someone prefers that.

If CISA genuinely are short of manpower, then they don't have the people to manage any sort of social media presence that allows responses (where you have to monitor for and deal with troublesome idiots). Just publish an RSS feed and have scripts automatically echo this out to several different social media networks on accounts that don't allow responses.

From a user perspective, it's easy to write scripts which simply wget the RSS feed, apply some rules to extract information of relevance to you that you need to know about right now, and have it appear in the notification area on your PC. Your RSS reader (you should already have one anyway if you are at all sensible) can read the entire feed for you to scan over and do a quick review on later when you have time.
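As a sketch of that approach, assuming a plain RSS 2.0 feed (the feed URL and keywords here are placeholders, not CISA's actual feed):

```python
# A sketch only: FEED_URL is a placeholder, and this assumes a plain
# RSS 2.0 document (Atom feeds are laid out slightly differently).
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/alerts.xml"   # hypothetical feed address
KEYWORDS = ("openssl", "kernel", "vmware")    # whatever matters to you

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# RSS 2.0 puts each entry in channel/item with <title> and <link> children.
for item in tree.iterfind(".//item"):
    title = item.findtext("title", default="")
    link = item.findtext("link", default="")
    if any(word in title.lower() for word in KEYWORDS):
        print(f"ALERT: {title}\n  {link}")
```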

I don't know who has time these days to try to keep up with things on Twitter. I certainly don't.

Redis 'returns' to open source with AGPL license

thames Silver badge

Re: Redis 'returns' to open source with AGPL license

Yes, I suspect that most Linux distros will package Valkey instead of Redis, and most users will simply use Valkey unless they have an especially compelling reason to use Redis. I have doubts about the Redis company being able to provide that compelling reason, so I think they'll just fade away.

Anthropic calls for tougher GPU export controls as Nvidia's CEO implores Trump to spread the AI love

thames Silver badge

There's an obvious way this is all going.

A few years from now, when most of the world is running Chinese AI models on Chinese AI accelerators made on Chinese semiconductor machinery, the people who came up with this grand plan will be telling us that this result was all the fault of the devious Chinese and their nefarious plans.

OpenBSD 7.7 released with updated hardware support, 9Front ships second update of 2025

thames Silver badge

Re: Re. OpenBSD's partitioning scheme

I get the impression that most OpenBSD users are using it as an appliance-like server. They set it up to do one thing and it just does that one thing. They interact with it on occasion via SSH. In a minimal install it doesn't seem to get a lot of updates, so it's fairly painless from a maintenance perspective. As a result of this, there probably isn't a lot of pressure to change the file system compared to other things that may be on the "to do" list.

thames Silver badge

No problems so far

I installed OpenBSD 7.7 in a KVM VM on Sunday evening when it came out. I use it in a testing environment, running a series of automated tests alongside a dozen other targets. I have had no problems with it so far; it works just like 7.6, except with updated packages.

Vector search is the new black for enterprise databases

thames Silver badge

Re: LLM for mangle ment twaddle

What is really needed to make AI useful and able to increase productivity is an LLM that will attend meetings and write status reports. Then another LLM can read the status reports and use this input to issue emails to tell the other LLMs to work harder. These emails can then be read by still more LLMs which also integrate input from image recognition cameras focused on motivational posters. The output from these LLMs is then fed back into the meetings to close the loop. This will automate the entire meeting / status report business function, greatly increasing productivity and business profitability.

This is the future, I can see it coming.

Canada OKs construction of first licensed teeny atomic reactor

thames Silver badge

Re: No way !

Engineering, procurement, and fabrication for these reactors is being done in Cambridge, Ontario, by a company who are subcontractors to GE Hitachi. The same company are also suppliers to CANDU reactors.

According to a Globe and Mail story published yesterday, AtkinsRéalis (formerly known as SNC-Lavalin) have shortlisted the same company for fabrication work on new CANDU reactors to be built by Bruce Power at the existing Bruce site, and for a proposed plant at Wesleyville near Port Hope (east of Toronto) by Ontario Power Generation (the same company as are building the reactor mentioned in the story).

The Wesleyville site may have up to 8,000 to 10,000 MW of new nuclear generating capacity according to OPG. Given these sorts of numbers, it would not surprise me if these are eight to ten 1,000 MW reactors rather than SMRs. AtkinsRéalis will therefore likely propose their 1,000 MW Monark CANDU design for this.

New generator plants are also being looked at for the Nanticoke (on Lake Erie) and Lambton (near Sarnia) sites.

Given the number of MW of capacity needing to be built, I'm not sure that OPG will build any more SMRs beyond the planned 4. I think they needed these 4 to be built ASAP due to the retirement of Pickering A (Ontario's oldest full size nuclear power plant), and that was the overriding consideration in that case.

thames Silver badge

Licensing is done in phases to take changes into account

Licenses are granted with conditions attached, so these proceed in phases. With the site prep work phase more or less done, the licensee can show that the work so far was completed according to license terms and so is ready for the next phase.

If problems were encountered during the site prep which required changes to the reactor construction phase, then this can be taken into account in this licensing phase. If they didn't do it this way then the licensee could find themselves in the position of work being completely stalled because of unforeseen problems that weren't covered in their original license submission.

Once the reactor is built they can conduct testing, show that it is ready to operate and that their staff are trained on it, and so get their operating license.

Pretty much any major project will go through multiple phases of approval from someone, as there needs to be oversight into whether a project is actually going according to plan or if it has gone off the rails. Despite what some people may tell you, nuclear power is very closely regulated and experienced regulators are involved in oversight at all stages of construction and operation.

As for US regulatory approval, the US have signed a cooperation memorandum with Canada and are getting copies of all the relevant data on this project as it proceeds. They will then use that information to make their own licensing decision. I suspect that once the reactor has started up in Canada, the US regulators will review the Canadian regulatory data and then sign off on their own approval for the US.

Arm reckons it'll own 50% of the datacenter by year's end

thames Silver badge

Re: Maybe, but not this year

I have C and Python software projects which I test on a dozen platforms, including 32 bit x86, 64 bit x86, 32 bit Arm, and 64 bit Arm. There are a total of 3 different operating systems (not counting different distros) and 3 different C compilers as well. I use a combination of VMs for x86 and Raspberry Pis for Arm. The testing system is automated and will run tests across different OS and CPU platforms in parallel. Test reports are summarized so I just have to look at totals to see if there are any problems. Individual test results are bundled together and saved in tar.gz files for each test session.

If you aren't doing something like this for at least one platform, then you aren't really doing serious testing. My experience with the above has told me that having the same OS on the desktop and server matters a lot more than having the same CPU chip. Your argument is basically an argument for running the same OS on your development desktop as on the server rather than the same CPU chip.

If you only care about one target platform, then make a VM for testing. Use the VM system's command line tools to start up and shut down the VM. Snapshot the VM and roll it back to the base test snapshot before each test. Use SSH to remotely control the test process and use expect if you need command line interaction. Write report scripts to summarize test results.
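A minimal sketch of that cycle, assuming a libvirt/KVM setup driven by virsh (other hypervisors have equivalent command line tools; the VM name, snapshot name, and host address are placeholders):

```python
# A sketch assuming libvirt/KVM and virsh. Assumes the snapshot was taken
# with the VM shut off, so the VM needs starting after the revert.
import subprocess

VM, SNAPSHOT, HOST = "testvm", "clean", "user@192.168.122.100"

def run(*cmd):
    """Run a command, raising an exception if it fails."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True)

run("virsh", "snapshot-revert", VM, SNAPSHOT)   # roll back to the base image
run("virsh", "start", VM)                       # boot the clean VM

# Drive the test run remotely and collect the output for the report scripts.
# (A real script would wait here for SSH to come up before connecting.)
result = run("ssh", HOST, "cd project && make test")
print(result.stdout)

run("virsh", "shutdown", VM)                    # done with this session
```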

If you need a different CPU to test on, then get any of a number of different Arm boards to do this with. The testing process is exactly the same as when using VMs on your desktop except that you don't need to start up and shutdown the VM.

My experience with Python is that it doesn't matter whether it runs on x86 or Arm, it all works the same. Linux versus Windows is more of an issue because of different text line endings: it is easy for the inexperienced to write code assuming one or the other instead of using Python's built-in platform-independent features to handle the differences.
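A small sketch of the trap and the portable alternative (the file name is arbitrary):

```python
# The trap in miniature. Python's text mode normalizes line endings on
# read (universal newlines), so portable code should lean on that instead
# of assuming '\n' or '\r\n'.

with open("data.txt", "w", newline="\r\n") as f:  # simulate a Windows-made file
    f.write("first\nsecond\n")                    # each '\n' is written as '\r\n'

# Portable: text mode hands every line back ending in plain '\n'.
with open("data.txt") as f:
    print([line.rstrip("\n") for line in f])      # ['first', 'second'] everywhere

# Fragile: splitting raw bytes on b'\n' leaves stray carriage returns.
with open("data.txt", "rb") as f:
    print(f.read().split(b"\n"))                  # [b'first\r', b'second\r', b'']
```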

With C you have to take more care, but that's where testing comes in. You need to test and test and test, but you should be doing that anyway, which is why you need to automate your testing process.

Where differences in CPU chip come into play in practical terms is in relative benchmarks. Something where 'x' is 'y' times faster than 'z' on x86 may not show as much of a difference on Arm, or it may be the other way around. That however is an argument for having systematic benchmark tests; different x86 chips have these discrepancies between them as well, it's not just an x86 versus Arm thing. Sometimes you need to test across a variety of platforms and just come up with a good compromise that you judge will stand the test of time.
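A minimal sketch of what such a relative benchmark can look like; the two implementations here are just stand-ins for 'x' and 'z':

```python
# Run this on each platform and compare the ratio there, rather than
# assuming the ratio measured on x86 carries over to Arm.
import timeit

def sum_loop(data):          # stand-in for implementation 'x'
    total = 0
    for value in data:
        total += value
    return total

def sum_builtin(data):       # stand-in for implementation 'z'
    return sum(data)

data = list(range(10_000))
t_loop = timeit.timeit(lambda: sum_loop(data), number=1_000)
t_builtin = timeit.timeit(lambda: sum_builtin(data), number=1_000)
print(f"loop is {t_loop / t_builtin:.2f}x the cost of the builtin here")
```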

thames Silver badge

Re: Maybe, but not this year

Yes, a different OS is a much bigger issue on modern computers than a different CPU chip. In my opinion, if you are deploying on Linux then you should be running Linux on the development desktop.

AI datacenters want to go nuclear. Too bad they needed it yesterday

thames Silver badge

Re: Water

A lot of the American AI companies want to be in the parts of the US which are either very dry or which otherwise have poorly developed regional water infrastructure. Microsoft for example say that 40% of the water they consume comes from areas with water shortages.

The obvious solution to all this is to locate big data centres in cool climates with good electricity and water supplies, but that's not how the tech industry works. Instead they usually prefer to locate in areas with poor infrastructure because they can pay lower taxes there.

Credible nerd says stop using atop, doesn't say why, everyone panics

thames Silver badge

Doesn't seem to be installed on any distros that I checked.

I just ran a quick automated check on a number of platforms by running "command -v atop", and none of them showed up as having atop installed.

The list included: Debian, Ubuntu, Raspberry Pi, Alma (Red Hat clone), SUSE, Alpine, openSUSE, OpenBSD, and FreeBSD.
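For anyone who wants to reproduce that kind of check, here's a minimal sketch assuming SSH access to each target (the host names are placeholders, not my actual test machines):

```python
# Check each remote target for atop via SSH.
import subprocess

HOSTS = ["debian-test", "alma-test", "openbsd-test"]  # placeholders

for host in HOSTS:
    # 'command -v atop' prints the path and exits 0 only if atop is on PATH.
    result = subprocess.run(["ssh", host, "command -v atop"],
                            capture_output=True, text=True)
    print(f"{host}: {result.stdout.strip() or 'atop not installed'}")
```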

I suspect that if atop is present on any mainstream Linux or BSD distro, then either the user installed it explicitly, or it was pulled in as part of something else (I don't have any suggestions on the latter).

If anyone did install this optional package, then it's probably not a big deal to remove it.

VMware sues Siemens for allegedly using unlicensed software

thames Silver badge

Re: This sounds like a bit of a mess

Keep in mind that this is Broadcom's slant on what is going on, the actual facts may be different.

I've done business with Siemens US in the past. They have the attitude that because they are Siemens you will do what they say and you will like it. That included changing the terms of the contract after it had already been agreed upon and demanding that you accept it.

It would not surprise me if what happened in this case is that Broadcom presented an invoice for 'x' licenses for 'y' dollars, and someone in Siemens purchasing returned a signed version with the numbers changed to 'x + 10%' licenses for 'y - 10%' dollars to see if Broadcom would swallow it. This is entirely in line with how companies like Siemens US do business.

Broadcom may have suffered a sense of humour failure and decided to "play hardball" as the Americans put it, launching a lawsuit over a technicality to show the other side just who is boss. I expect this to get settled out of court when one side or the other backs down.

Since Siemens US can't exactly suddenly rip out all their VMWare installations, it won't surprise me if Siemens are the ones to back down over this. The revision to the numbers suggests that this is exactly what is happening.

I imagine that a lot of big companies out there feel they have enough negotiating leverage with Broadcom due to their size that they won't be affected by Broadcom's new policies. They may find out that this isn't the case. If so, then I won't be surprised if a number of big companies start to bail out of VMWare in a few years' time.

Microsoft tastes the unexpected consequences of tariffs on time

thames Silver badge

Re: It is unclear.. what problem the video was intended to fix

I expect the video requirement is intended to help stem the flood of bogus AI-generated "bug reports" which everyone with a prominent project is now receiving.

People running AI systems are turning their LLMs loose on projects and creating automated bug reports about non-existent problems. These AI systems can generate very plausible sounding reports which someone must spend considerable time investigating before finding out that the "bug" being reported does not exist. The people responsible for these AI bug systems are not checking the output before they send it in; they are just running the software in auto mode and firing off large volumes of whatever comes out.

A video would be a barrier to these sorts of reports because there is as yet no publicly available compendium of video bug reports which people can use to train their AI systems on to generate new ones.

Basically, the video requirement is a sort of spam filter, with AI generated bug reports being viewed as little more than spam in terms of their usefulness.

As to why people are creating these AI generated bug reports, I suspect that at least some of those targeting open source projects (I don't know what the situation is for Microsoft) are using them to fine tune their AI models for use as coding assistants. A bug report is an ideal means of training. A known AI generated report is turned into a precisely known output in terms of code. You then feed that back into the model as very high quality training data and continue in a never ending feedback loop. False reports get rejected, and those rejections are also fed back into the model. If the overwhelming majority of reports are bogus, that doesn't matter, because a negative report is also useful data to be fed into the model.

If the people behind the AI model had to do this themselves it would cost a lot of money to hire people to review AI output, see if they are real bugs, provide fixes to feed into the model, and turn them into training data. However, by using public bug reporting systems all of the very expensive labour for this is provided free of charge to the AI company. The costs are borne entirely by the people running the bug reporting system, who get flooded with huge volumes of AI generated reports they have to spend an inordinate amount of time sifting through before they can reject them.

This is one of the "benefits" of AI which society is having to deal with these days. Just like spam email became ubiquitous because of the low cost of sending out automated emails, many, many, other forms of communication will in future suffer the same fate due to AI, and bug reports are a form of communication. It's simple economics.

Tesla Cybertruck recall #8: Exterior trim peels itself off, again

thames Silver badge

Re: This ladies and gentlemen...

I have a good deal of first-hand experience in auto manufacturing with several US, Japanese, and European car brands. US Ford and GM cars are reasonably well made. I don't like their current model line-up in North America, but they're reasonably well made. Chrysler has always had a reputation for being less well made, but I would still put them as better than Tesla.

Tesla though have always been known for poor build quality and questionable design engineering. They have always sold purely due to brand promotion and image. They got into the US electric car market early and a heavy focus on the California market got them a lot of publicity with celebrities and people who write about celebrities.

They basically sell a Lada or Trabant grade product at BMW prices. If you want an electric car you are much better off buying one from one of the long established major brands who will sell you a much better made and designed car for the same price or less.

I won't be surprised if Tesla end up going out of business or if their assets are bought by someone else for a small fraction of their current stock market value. They are a luxury brand that sell a poorly made product based on image. That image is being steadily undermined by the antics of a certain rather questionable person. Once their image is tarnished nobody has a reason to buy one anymore.

AI running out of juice despite Microsoft's hard squeezing

thames Silver badge

Re: What's the next boondoggle?

Perhaps they could replace Gartner with an AI.

thames Silver badge

Re: As much as I agree with the sentiment that AI is overhyped...

Shouldn't the AI be able to figure out what the business use cases are for AI? After all, taking in huge masses of information about a problem, digesting it, and spitting out simple responses based on what others have done is exactly what AI is supposed to excel at. In other words, shouldn't business consultancies be just AIs?

If an AI can't do that, then perhaps AI isn't all that it has been cracked up to be.

Official HP toner not official enough after dodgy update, say users

thames Silver badge

Re: if a customer HP has invested in

I had a Samsung laser printer for home use for more than 20 years and it was a very good printer, very reliable, and it didn't give me any trouble. I just plugged it in and it was instantly recognized by the OS (Linux), and it printed whenever I wanted to print anything. The toner cartridges lasted for quite a long time.

However, age eventually caught up with it and it wouldn't feed reliably (the rollers had gone hard and wouldn't grip) and nobody around here seemed to sell them anymore, so I looked for alternatives. I looked for a Brother as they are highly regarded, but the ones on sale here seemed to be just the more expensive models meant for business use.

I ended up buying a Pantum and I'm quite happy with it so far. The shop assistant told me that Samsung's printer business had been bought by HP and a lot of customers these days want nothing to do with HP. I replied that I wouldn't touch HP with a barge pole (the Samsung had replaced an HP printer).

I haven't had the Pantum long enough to form a definite opinion on it (I'll wait a few months before that), but so far I just had to plug it in and the OS recognized it immediately and it worked flawlessly and printed what I needed to print.

I'm of the opinion that unless you have a special use case there's no reason to buy an ink jet printer these days. A B&W laser printer does the job for most people and is much cheaper to run, because toner cartridges, unlike ink, won't have dried up and needed replacing by the time you want to use them.

Cheap 'n' simple sign trickery will bamboozle self-driving cars, fresh research claims

thames Silver badge

Re: Spatial memorization and "appearing" signs

As I understand it, the "memorization" is just a short term thing that the car "forgets" once it has passed where it thought it saw a sign, or after an appropriate time-out. Something like this would be necessary to deal with issues such as the sign getting hidden, partially or completely, by shrubbery or other things around it as the car approached it.

In other words the sign may be hidden intermittently as the vehicle approaches it by small trees, other signs, bus shelters, etc., so the vehicle has to "remember" that there was a sign there while it is obscured, instead of forgetting it the instant it disappears from view. A human driver would know to do this without being told, but the AI system has to be explicitly programmed to take it into account. This is one of the real world problems that any such system has to be able to deal with in an imperfect world.
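As a toy illustration of that kind of short-term memory with a time-out (entirely hypothetical, and not how any particular vendor's system works):

```python
# A detection refreshes the sign's timestamp, and the sign is only
# forgotten once it hasn't been seen for longer than a timeout, so a
# momentary occlusion doesn't erase it.
import time

class SignMemory:
    def __init__(self, timeout_s=3.0):
        self.timeout_s = timeout_s
        self.last_seen = {}  # sign id -> timestamp of last detection

    def observe(self, sign_id):
        """Called whenever the vision system detects a sign."""
        self.last_seen[sign_id] = time.monotonic()

    def active_signs(self):
        """Signs still considered present, including briefly occluded ones."""
        now = time.monotonic()
        # Drop signs that have gone unseen past the timeout.
        self.last_seen = {s: t for s, t in self.last_seen.items()
                          if now - t <= self.timeout_s}
        return set(self.last_seen)

memory = SignMemory(timeout_s=3.0)
memory.observe("stop:main_and_5th")
# ... a bus briefly hides the sign; no observe() calls for a moment ...
print(memory.active_signs())  # still remembered: {'stop:main_and_5th'}
```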

Some of the image trickery systems seem to work by using patterns that confuse the image recognition system as it passes from one image cell to another due to parallax as the vehicle moves. In other words, the stationary sign appears to "move" from the imaging system's perspective due to the motion of the car, and the patterns applied by the stickers confuse the image recognition system as they slide across the field of view.

I suspect that in the long run, if self driving cars become something other than a novelty found on a very small number of cars, authorities will install passive RF markers in appropriate places to supplement signs, for cars to use instead of relying only on visible signs. If they don't, then the self driving feature would become unreliable for half the year due to snow obscuring signs and road markings. A human driver can deal with a stop sign that is covered with snow simply by noting that there is a sign in roughly the spot where a stop sign could be expected to be, given the surrounding terrain context. An AI may not make that decision reliably, however.

Time to make C the COBOL of this century

thames Silver badge

Re: C is the new COBOL

The issue is not whether programmers are "on side" with a language. The issue is that loads of important existing major software projects are written in C, and we are asking what we can do to improve security in those programs.

Sure you could simply re-write them from scratch in whatever your favourite new language is, but that is a huge, expensive, and very long term project and that is the thing that programmers are not on side with.

So the question is, how can we start from where we are now and get better versions of the software we already have. I think the answer is to provide an evolutionary path forward from where we are now instead of telling people to go back several decades in development time and find a different path altogether starting from there.

thames Silver badge

Re: C is the new COBOL

You can write SIMD code in C. I've done it extensively over many years for both AMD64 (x86-64) and ARM (both 32 and 64 bit).

The only problem is that it requires extensions to the C language, and these extensions are not standardized. GCC has these extensions, and MS C does as well. Although I haven't used it with MS C, the extensions that I did look at in the documentation were very similar to GCC's except for their actual names (which I suppose could be papered over with a macro). You can't do it with Clang/LLVM however, or at least not the last time that I checked, as it didn't copy that feature from GCC.

Using SIMD effectively often requires using completely different algorithms than used for architecture independent code, so there's no way for the compiler to automate using SIMD instructions when working from architecture independent code except in the simplest and most trivial cases. It also means you often need different code for different chip architectures due to SIMD simply working differently with different chips.

Using the C extensions is still far, far simpler than writing the code in assembly language however, so they are very good to have. All they really need at this point is for the names to be standardized between compilers.

thames Silver badge

Re: C is the new COBOL

Yes, C is the new COBOL in the sense that nobody has come up with a viable backwards compatible language which would allow a gradual migration of an existing program to the new language. COBOL programs can realistically only be replaced with entirely new programs, not gradually migrated. This means that COBOL programs often live long beyond the lifetime of their creators.

So the people who see C as equivalent to COBOL are essentially saying that C isn't going away anytime soon, and it will be underpinning the foundations of the computing world for decades to come, and that anyone who wants to work closely with those underpinnings is going to have to know C.

C is not my favourite language, but I'll work with it if that's what I have to do to get the job done. And a lot of jobs that I need to get done require knowing C.

The problem with a lot of proposed replacements for C is that they look like the new Ada, languages that were oversold on their supposed security and reliability benefits and which will fade away once they are no longer trendy. It shouldn't surprise us that the US government were the main driving force behind Ada back when it was relevant. It seems they want to repeat that "success" again today now that a new generation of managers have taken over there.

The one supposed successor to C which has been successful is C++. That also is not my favourite language, but there is no denying that it has been hugely successful and will no doubt be with us for a very long time to come. The reason why C++ was hugely successful is that it was designed as a backwards compatible development of C. That provided a migration path for those who wanted one. C++ is far from perfect, but it was a very practical solution to a real problem.

What is needed for a C replacement for existing projects is a language that takes a page out of C++'s book in terms of extending the language, but this time in a very minimal way, to solve the perceived problem while remaining backwards compatible. It doesn't have to incorporate every new feature that people have dreamed up since C was first created, it just has to solve the problems that are found in most major existing C projects.

Our world faces 'unprecedented' spike in electricity demand

thames Silver badge

Re: "Very little weapons grade material, if any, has come from commercial reactors."

Post-WWII UK nuclear development was based on the joint UK-Canada nuclear weapons program which operated during WWII, located in Canada and entirely paid for by Canada. They were apparently working on a plutonium bomb. The first reactor (a heavy water design) was turned on at Chalk River, north of Ottawa, a week after Japan's surrender. Rutherford's pre-war Nobel-winning research on the atom, by the way, was done in Canada, in Montreal, very close to where the nuclear scientists set up shop for weapons research during the war.

The UK and Canada had an on-again, off-again relationship with the US Manhattan project. The US cut off cooperation once they got all the technology they wanted from the UK because they were concerned that UK companies would dominate the post-war civil nuclear industry, since ICI (a UK company) held certain key patents on civilian uses of nuclear power (bought from a French member of the UK team).

After the war ended Canada and the UK split on nuclear development and the joint bomb project got dropped as no longer urgent. Canada then focused entirely on civilian applications. Canada was actually willing to finance the WWII bomb project mainly due to having an eye on post-war civilian applications, which were expected to revolutionize industry (this expectation turned out to be overblown).

The UK meanwhile took all the joint research home to work on a bomb, but were working on a shoestring budget due to Britain's dire financial condition.

Subsequent Canadian reactors continued to use heavy water while UK reactors used graphite for a very simple reason. The heavy water plants were in Canada (Canada had supplied the US with heavy water for the latter's R&D during the war) while the UK had no such plants and foreign exchange to import it was very tight in the immediate post war period. So, they went with graphite instead, which they did have.

thames Silver badge

Re: "My gripe with the nuclear path for the last 40 years is we are still

The plutonium for India's nuclear weapon was not produced in a CANDU reactor, it was made in a CIRUS reactor. This was also a Canadian design, but it has no similarities to CANDU other than also using natural uranium fuel.

As a "tank" style research reactor it is more suited to being modified, tuned, and operated to produce weapons grade plutonium than a tube style power reactor such as CANDU.

Just as with uranium, there are different isotopes of plutonium, and only one isotope is considered useful in making a nuclear weapon. Separating that specific isotope from the others is more difficult, if not impossible, with existing technology than separating U-235 from natural uranium to make a bomb. The only practical way to make a plutonium bomb is to avoid making the undesirable isotopes in the first place, and that requires a reactor designed, tuned, and operated for the purpose, or one which can be modified to do so, such as a tank style research reactor.

The UK's Magnox reactors were designed to produce weapons grade plutonium with electric power as a byproduct. You can't make a plutonium bomb using normal spent fuel from a normal power reactor. No technology to do so exists at this time.

There's a reason that Iran and North Korea spent vast sums of money on uranium enrichment plants to make nuclear weapons rather than making a plutonium bomb from spent fuel. After looking at all the options, they decided it was easier and cheaper to do it that way.

Under international nuclear control treaties there are defined levels of certain isotopes of uranium and plutonium which separate "civil" from "weapons grade" material. There are no more weapons issues with civil plutonium based fuels than there are with civil uranium based fuels.

thames Silver badge

Re: On the upside.

There is no reason for the UK to buy fuel from Russia when they can buy it from Canada, Australia, or any number of other places instead. Russia are simply offering cheap enrichment and fuel fabrication services.

I think a lot of the fuel Russia exports is actually made from uranium imported from Kazakhstan. Customers contract Russia to process this uranium purely on the basis of price. There's nothing stopping the UK from doing this itself.

thames Silver badge

Re: On the upside.

Anon said: "Firstly, Canada does no such recycling/reprocessing, so I am curious as to why you think it ever had such a programme."

Canada used mixed oxide fuel recycled from nuclear weapons in the 1990s to reduce nuclear stockpiles. However that was a program to reduce the number of nuclear weapons in the world. Uranium is so cheap and abundant at this time that recycling fuel is not economically justifiable. Canadian reactors use natural uranium, so converting from ore to fuel pellets requires relatively little processing and so is very cheap. The R&D to recycle the fuel has been done, but is currently waiting for uranium prices to rise to make it worth while.

Anon said: "In fact I believe THORP still has the world's largest stockpile of Pu sitting up there even today."

The THORP plant was built to produce fuel for "fast" reactors, not the "slow" reactors which are what are used across the world today. It was based around the premise that uranium prices would rise significantly, making "fast" reactors more economic to operate. However, new huge high grade uranium mines in places such as Canada and Kazakhstan rendered those predictions obsolete, leaving THORP without a market. The French recycling process was based around producing fuel for conventional "slow" reactors, and it turns out they put their bets on the technology that came out ahead in the market.

Anon said: "Perhaps you mean Thorium or other fast breeders ..."

Thorium reactors are not fast reactors, they are slow reactors, just like most reactors in use today. The fuel would be very similar to what the French use in their recycled fuel, just using thorium in place of U238. India uses thorium fuel as part of their fuel load in their existing reactors, which are derived from Canadian designs. If you use a Canadian style natural uranium reactor there is nothing stopping you from using thorium as fuel in it. However, as the fuel requires essentially the same processes to make as recycled uranium fuel, there's no economic incentive to use it unless like India you are trying to conserve scarce domestic uranium resources and use more abundant thorium instead.

thames Silver badge

Re: On the upside.

Canada used mixed-oxide plutonium-uranium fuel for some reactors in the 1990s as part of a program to reduce the world stockpile of nuclear weapons. The reactor operators were being paid to "burn up" the weapons grade plutonium in their reactors. However, once that program ended we stopped using that type of fuel.

The reason it isn't being done today is that uranium is so cheap and abundant that it isn't economically worthwhile for Canada to recycle fuel at this time. France do it in order to reduce the amount of uranium they have to import for national security reasons, but Canada is a major uranium exporter so there's no incentive on those grounds.

Canada has developed multiple processes for recycling nuclear fuel in existing reactors, but these will sit on the shelf until such time as uranium prices rise enough to justify it from an economic perspective. Since Canada's reactors are very efficient at using fuel, the recycling processes are much simpler and less expensive than those currently used for other reactors.

Canadian style reactors can also use thorium, with minor modifications. India uses reactors derived from Canadian designs and these currently use thorium as part of their fuel load as India have lots of thorium but much less uranium. However, thorium fuels are more expensive than uranium, so again there's no incentive for Canada to use them at this time.

After clash over Rust in Linux, now Asahi lead quits distro, slams Linus' kernel leadership

thames Silver badge

Re: C-Derived Programming Language + Memory Safety

Rust does not offer absolute guarantees of memory safety. Memory safety has to be turned off to do many things that need to be done, particularly with respect to working with Linux which was not designed around a memory safe language and so has many features which have to be used in an "unsafe" manner.

This means that just using Rust does not in itself provide any additional memory safety guarantees. You would have to write an entire OS from the ground up in Rust in order to take advantage of Rust's language features. And if you are going to do that, why would you start with Linux? Just write the new OS without worrying about Linux. If it's really that great and secure, then people will use it based on a better security record and Linux can be gradually retired. I have no emotional attachment to Linux.

If on the other hand the goal is to improve Linux, then the right tool for the job is one which works as well as possible with the existing code base allowing for incremental improvement instead of a multi-year big bang rip and replace. Rust is not designed for incremental change, it's all or nothing.

Right now Rust is in a relatively isolated corner on its own with minimal interaction with the rest of the kernel. Nobody has the time or resources to rewrite large parts of the kernel in Rust that are currently working well enough as they are, even if there may be potential memory safety issues that nobody has found yet.

In other words, there is currently no viable plan of action under which Rust will ever become the sole or even main language used for Linux. If the goal is better security through better memory safety, we won't get there by using Rust.

If the goal is to improve Linux rather than use Linux to promote Rust, the only realistic plan is to use a language which is so closely derived from C that changes to existing code to use it are minimal while memory safety features are additions or extensions to syntax which don't conflict with backwards compatibility. That would allow a gradual function by function change within the same file during the course of normal development and bug fixing rather than having to replace entire subsystems in order to use it at all. This in turn will be something that happens over the span of decades rather than all at once.

I don't believe that this language exists yet. However, there are many people working on memory safety in static languages who are absorbing the current lessons on improving memory safety in existing projects with large code bases, and they may come out with a suitable solution within the next few years. Their scope would be much more focused than that of the Rust developers, as they would not be designing a whole new language.

They would also not need to come up with a perfect solution. Something that offers a significant improvement that falls short of solving 100% of the problem but is actually easily assimilated into existing projects will make much more of a practical difference than something that offers a theoretically better solution but has no practical path forward.

As I said before, I'm not a fan of C. I use it because it's the language that I need to use in order to work with existing systems. I have learned many different languages over a long career, and continue to learn new ones now.

However, my objective is always to produce useful software, not to promote a language. To me, the language is just a tool to get the job done. If the goal is to improve security in Linux, then Rust is simply the wrong tool for the job, and I wouldn't recommend using it for that any more than I would recommend using Ada or Modula 2 in Linux, regardless of how the latter two languages may be objectively "better" than C.

thames Silver badge

I'm not a fan of C as there's a lot of things I don't like about it. However, I use it anyway because that's what a lot of existing software uses and I need to work with that.

I really liked Modula 2 and at one time wished that was the popular language instead of C. But it didn't become popular so I faced up to reality and learned and used C and forgot about Modula 2. I would rather have a project that has a future and is written in a language that lots of other people know than a project that just happened to be written in a language that I liked but was used by nobody.

I'm sure there are nice things about Rust as a language. However, it isn't the language that unix, Linux, and loads of associated software were written in, and the disadvantages of a multi-language project are significant and unduly burdensome on developers.

What I would prefer is a language which is derived from C (Rust is only vaguely C like) but has seldom used features removed and memory safety features added in. This would allow a sensible upgrade path without having to rewrite everything.

Rust in Linux is currently isolated to a particular subsystem which has limited contact with the rest of the kernel. However, sooner or later it will hit a wall in terms of adoption in existing projects, because expanding its use will require rewriting loads of existing code that was created, tested, and proven in use over several decades. This is a recipe for failure.

I got over being a programming language fan years ago, having learned multiple different languages that came and went. I like writing software that accomplishes a goal, and the language itself is just a tool to get the job done. In my opinion Rust may be fine as an abstract language, but it's the wrong tool for solving the problems that Linux has. I don't think the right tool exists yet, but it won't surprise me if the right tool is created within the next few years now that the problems with using Rust in existing projects have become more apparent.

Amazon-backed X-energy bags $700M more for itty-bitty nuke reactors that don't exist yet

thames Silver badge

SMRs in Canada

There is an SMR currently under construction just east of Toronto, with 3 more planned to be built alongside. The first one will be on line and delivering power to the grid before X-Energy's most optimistic projections for their first one (which hasn't even got construction plans yet).

However, it uses conventional low-enriched fuel available from commercial suppliers, instead of the special medium-enriched proprietary fuel such as X-Energy is using. A single SMR of this type will also put out about as much power as four of X-Energy's. Or to put it another way, you would need 16 of X-Energy's reactors to equal 4 of the ones being built in Canada. I doubt that X-Energy can deliver their reactor for a quarter of the capital investment, and I doubt that their proprietary fuel will prove to be cost competitive with conventional reactor fuel.

What is more, X-Energy claim their reactors don't need containment buildings. I would be surprised if they got approval to operate a reactor based just on their fuel needing no containment, especially as they claim that their fuel pellets are only rated to contain 99.99% of fission products. I'm not sure that conventional fuel pellets are significantly worse.

The Germans built a pebble bed reactor (which is the generic name for these things) in the 1960s. It was tested with both BISO and, later, TRISO fuel (the latter being the type X-Energy intend to use), and it was gas cooled with helium.

It was, to put it mildly, a failure. Temperature control was difficult, the fuel pebbles emitted contamination, and the whole system was contaminated by fine radioactive dust from the fuel. Radioactive material leakages contaminated the site, and the whole thing ended up having to be filled with concrete to try to fixate the contamination in place. Decommissioning and dismantling are apparently expected to take until the end of the present century.

Not deterred by this, the Germans built a second, larger version. This suffered from even more problems. Dust and debris from the pebbles (they circulate through the system) plugged cooling channels, and the pebbles themselves would get stuck. The Germans finally gave up on the idea at the end of the 1980s. Several other countries have built systems since; none has been especially successful.

All reactors of this type use graphite in the pebbles' outer layers as a moderator. This graphite-coated fuel is one of the design's weaknesses, not a strength as they like to present it. It was graphite moderator fires which caused the Windscale and Chernobyl accidents, not fuel meltdowns.

The pebble bed reactor is one of those things which sounds simple when sketched on the back of an envelope. However, they have all been complex and difficult to operate in practice once the real world intrudes into theory.

My opinion is that they offer no real advantages in terms of safety, cost, or simplicity, and are a technological dead end.

The main attraction that these and other unconventional SMRs offer is to the companies that will supply the highly specialized proprietary fuel for them. It will be a license to print money for the company that holds the rights to the fuel.

The point of Small Modular Reactors is the "modular" bit. It applies the same sort of modular construction techniques used in the latest shipyards to build ships in blocks which are then assembled ready to go rather than building piecemeal on site. Conventional reactor technologies can be adapted to modular construction without having to use totally new reactor designs or exotic fuels. I expect that these are what will be commercially successful.

VMware migrations will be long, expensive, risky, Gartner warns

thames Silver badge

Re: Broadcom/ESX migrations to Nutanix AHV are nowhere close to these metrics.

I haven't done this myself, but from what I can gather it's the really big customers with really big installations who will find it difficult and expensive to migrate. It will be much less of an issue for smaller customers.

I suspect that there will be a good business for IT consultants over the next few years in moving small to medium clients from VMWare to an alternative (e.g. Nutanix).

thames Silver badge

I haven't seen the original report, but from what I can gather from what is being written about it, what Gartner are saying is that if you are planning to move, then get on with it now and don't leave it until shortly before your contract with VMWare is about to expire. Too many people are simply waiting to see what everyone else is going to do instead of getting started on it themselves.

They are also saying that it may be difficult and expensive to move, but if you wait it will be even more expensive and difficult later.

So they are basically saying, either get on with it or resign yourself to paying through the nose for VMWare.

How datacenters use water – and why kicking the habit is nearly impossible

thames Silver badge

Re: just a thought

In Toronto the district heating system also provides cold water for air conditioning in summer. The municipal water system draws water from several kilometres offshore deep in Lake Ontario where the water stays cool (4 C) year round. This water is then run through heat exchangers to cool the water in the district air conditioning loops. This then provides air conditioning to several hundred buildings, saving about 75 per cent of the electricity which would otherwise be required for air conditioning. The water being used for cooling is already being drawn for municipal water supply anyway, so there's no additional water being used.

Toronto also gets the majority of its electricity from nuclear power, with more plants being built close by. The nuclear power plants also draw their cooling water from either Lake Ontario or Lake Huron. The Great Lakes are large enough that the amount of heat being discharged into them is insignificant compared to their size. In winter only a very small area outside of the cooling discharge outlet doesn't freeze, so it's easy to see that the amount of heat involved may be large on a human scale but is very small on a geographic scale.

If data centres need lots of electricity and cooling water then they need to start locating in places where these resources are abundant and stop building in places where they are lacking. As can be seen with Toronto (natural cooling water and nuclear power), there are solutions and they are practical even if they are foreign to Silicon Valley.

Boffins carve up C so code can be converted to Rust

thames Silver badge

Re: “Minimal adjustments”

The HACL conversion to Mini-C required only minimal code changes, and the EverParse conversion required no source code changes at all. Just because the C language has some problematic features doesn't mean that your C program uses any of them.

So, hypothetically we could create a "Mini-C" compiler and add things like "fat pointers" and run-time bounds checks to it. Then see if our C program compiles with it and passes its tests. If it does, we're done without using Rust and with no source code changes, just a recompile.
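As a toy illustration of what a "fat pointer" buys you (written in Python for brevity; a hypothetical Mini-C compiler would emit the equivalent bounds check around pointer dereferences in compiled code):

```python
# A fat pointer carries its bounds along with the address, so every
# access can be checked at run time. Entirely illustrative.
class FatPointer:
    def __init__(self, buffer, base=0, length=None):
        self.buffer = buffer
        self.base = base
        self.length = len(buffer) - base if length is None else length

    def __getitem__(self, offset):
        # The bounds check a bare C pointer cannot perform on its own.
        if not 0 <= offset < self.length:
            raise IndexError(f"offset {offset} outside [0, {self.length})")
        return self.buffer[self.base + offset]

buf = bytearray(b"hello world")
p = FatPointer(buf, base=6, length=5)    # points at "world"
print(bytes(p[i] for i in range(5)))     # b'world'
try:
    p[5]                                 # one past the end
except IndexError as e:
    print("caught instead of overrunning:", e)
```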

If there are a few spots in the program which need features that Mini-C doesn't have, then factor those out as separate regular C files, and include them in the project, just like Rust does with "unsafe" code. We're then done without using Rust and with limited source code changes.

Perhaps Rust would handle a few corner cases better than a hypothetical Mini-C, but if a C variant can do 90 percent of what Rust provides it would be more likely to be widely adopted sooner and so provide more security in a practical sense when looking at the industry as a whole.

If just replacing existing code bases with a new language was simple, easy, and cheap, all that COBOL code out there would have long ago been turned into Java. That hasn't happened, and I similarly suspect that C will be around for a very long time to come and lots of new C code will be written.

Fission impossible? Meta wants up to 4GW of American atomic power for AI

thames Silver badge

Re: SMRs are a scam

There are economies of scale to most electrical generating plants, regardless of the technology used. For nuclear reactors many of these are associated with the civil works such as site prep, roads, cooling water channels, grid connections, etc. This is why so many power plants have multiple units.

New "conventional" SMR designs have optimized their layouts to have reduced footprints in order to require smaller civil works, and also to use less concrete by making the reactor building more compact (so there is less to enclose).

Existing heavy water moderated reactors, such as the CANDU, which are in commercial operation around the world, can use thorium as fuel. India uses thorium as part of their fuel load in their CANDU derivatives. India do this mainly because they have lots of thorium but much less uranium, and they want to reduce their dependence on non-domestic supplies.

For most countries though, uranium is so cheap that it simply isn't economic to use the more complex and expensive thorium fuel. Reactors fuelled entirely with thorium will not "self-start". They need either enriched uranium or plutonium (reactor grade, not the special isotopes used in bombs) to start the reaction and create U-233 from thorium to run the main reaction.

At this time, it is cheaper to simply use abundant supplies of uranium in a once through fuel cycle than to set up the chemical reprocessing facilities to recycle spent fuel into mixed-oxide (MOX) uranium or thorium based fuel. France do it (for uranium based MOX), but that was a decision made based on national security to reduce reliance on uranium imports.

If uranium prices rise high enough, then there are already reactor designs that can use uranium or thorium based MOX fuels, and these reactors have been in large scale commercial operation in multiple countries around the world for decades. They just don't happen to be used in the US, which is why the US-centric media don't talk about them much.

thames Silver badge

Re: Perhaps there's a ready solution.

The design of the 300 MW Hitachi SMRs which are being built in Canada at this time is also approved by the US NRC. The Canadian Nuclear Safety Commission (CNSC) and the US NRC shared information during the safety design review process. The US plan on building the same SMR in the US and there are MOUs between the two countries with regard to sharing Canadian experience with this design.

thames Silver badge

Re: There's SMRs, and then there's SMRs

It's notable that Rolls-Royce's SMR is a 470 MW design, about the size of a typical late-1970s reactor, and it uses conventional fuel from commercial suppliers. Again, the emphasis was on the "modular" rather than the "small" aspect of SMR. The design is centred around how to assemble large factory-built components with the minimum of time and work on site. It is an entirely different beast from the unconventional micro-SMR designs.

thames Silver badge

There's SMRs, and then there's SMRs

There are four SMRs already under construction just east of Toronto. These are 300 MW Hitachi designs, being constructed by a long-established Canadian nuclear component supplier for a major utility. They are about the same size as the first generation of commercial reactors which were built in Canada (nearby to these ones). They will use standard fuel from commercial suppliers. The economics and technology are not really an issue.

What is questionable is the very small SMRs being promoted by companies with no track record in the nuclear industry, no manufacturing facilities of their own, and which use specialized highly enriched fuel not available from existing commercial suppliers. These very small SMRs (100 MW or even smaller) have very questionable economics, as there are significant economies of scale in most power plants. They are simply too small and use very expensive non-conventional fuel.

If Meta want 1,000 to 4,000 MW, then three to a dozen or so 300 MW reactors could do the job for probably much less cost than the very small SMRs being promoted by some companies.

The whole point of a "small modular reactor" is the modularity, the "small" is just a way to get there. The idea is to build as much as possible in a factory and do the least amount of assembly on site. The company which can do that with the largest reactor will have the lowest operating costs.

What Meta should be doing is building their data centres in places which already have a good supply of electricity, instead of putting them in places with no electricity and then looking for someone to build power plants to serve them.

I expect that most of these micro-SMR companies will go out of business without having ever built an actual commercial reactor, while the companies offering ones in 300 MW and up sizes will be selling plenty to utilities.

Cryptocurrency policy under Trump: Lots of promises, few concrete plans

thames Silver badge

To expand on what you said, "orphaned wells" are oil and gas wells where the previous owner went bankrupt and the bankruptcy trustees couldn't find any company willing to touch them because they're worthless, and indeed an overall liability. The wells that are actually worth operating get bought up by another oil company.

In Canada orphaned wells end up in the hands of the provincial government, and the cost of sealing them is paid by a fund which all oil companies must pay into (in Alberta that fund is grossly underfunded in terms of future liabilities).

The idea of a crypto-mining company somehow succeeding at operating oil and gas wells where the entire industry of actual oil companies looked at it and decided to pass is a bit odd, to say the least.

I find it utterly implausible that anyone promoting this idea actually believes what they are saying about it. The fact that someone is promoting it tells me what should be obvious to anyone, it's a scam.

Trump tariffs transform into bigger threats for Mexico, Canada than China

thames Silver badge

Canada and the US have a long standing treaty which allows either country to send illegal immigrants back to the other. There was a loophole which allowed people to claim refugee status at official border crossings, but this was closed a couple of years ago with a new treaty. Now refugee claimants can get tossed back for the original party to deal with.

The new treaty was signed when Biden was going to visit Canada and was told that the number one issue the Canadian press were going to ask him about was the flood of illegal immigrants from the US crossing into Canada (it's mainly from the US to Canada) through the refugee loophole. A treaty which the US (including Trump) had for many years insisted was not possible for the US to sign suddenly became possible, and it got signed and approved in short order in time for the visit by Biden.

Prior to that the biggest issue that Canada had with the US was illegal immigrants coming from the US to Canada, while the Americans (including Trump) claimed they could do nothing about it. Canadian opposition politicians were demanding that Canada build a wall to stop it, although nobody was suggesting that the Americans pay for this one.

Most of the border is actually not that easy to cross illegally. Most of it is either in very remote areas, or lakes and rivers and mountains. Both are monitored closely. The border is where it is because it was a defensible line dating from a series of UK-French, and later US-Canada, wars. Canada faced a long running invasion, insurgency, and terrorism threat from the US through most of the 19th century.

There is still a smuggling problem, with most of it being drugs and guns from the US being smuggled into Canada in shipments of goods. The US import drugs from places such as South America and Asia, and organized crime gangs arrange for them, along with US-made illegal guns (mainly pistols), to be smuggled into Canada.

The biggest single problem is probably an Indian Reservation which straddles the border south of Montreal. It's technically two separate reservations, but the residents have special treatment from both countries which allows them to travel freely between them without passing through customs and immigration. Native organized crime gangs have heavily infiltrated local governance and police (they have their own police forces) on both sides of the border, and smuggling of everything from cigarettes, to drugs, to illegal immigrants is a major industry. Their proximity to Montreal and New York means they have very good transportation links to distribute their goods everywhere.

Doing something about it means making coordinated changes to treaty arrangements by both countries with both reservations, but that is a hugely sensitive historic political issue, so nobody has done much about it. Trump completely ignored it the last time around, so I doubt he'll do anything this time either.

thames Silver badge

Re: Bring it on

Canada produces about 3.5 million barrels per day of bitumen from the oil sands, and this number is from before the new export pipeline to Pacific markets started up this past spring. Prior to that, production was limited by pipeline capacity. Canada also produces conventional crude from the prairies and offshore on the east coast, but two thirds of overall oil production (about 5.4 million barrels per day of all types) is bitumen from the oil sands of northern Alberta.

The bitumen is diluted with lighter oil (diluent) to help it flow through the pipeline. Smaller pipelines flow in the opposite direction to return the diluent from the ends of the pipelines back to the start so it can be reused.

There are also some plants which convert the heavy bitumen into lighter synthetic crude oil, but it's generally just cheaper to modify the receiving oil refineries to be able to use the bitumen as is.

Major US oil refineries on the coast of the Gulf of Mexico were designed to use very heavy oil from Venezuela. At about the same time as the latter's oil industry started circling the drain several decades ago, oil sands production technology in Canada progressed to the point where large scale production was profitable. As Venezuelan production fell, Canadian production replaced it in US markets. These refineries are designed around this particular type of oil and there aren't many alternative sources, so US tariffs against Canada will feed directly into higher prices for consumers in the US.

The new pipeline to the Pacific was built by the federal government specifically to try to diversify oil exports away from the US to reduce the economic and strategic risks of being too dependent on trade with the US. Due to the earth being a sphere, Japan, Korea, and China are reasonably close to BC in terms of shipping across the north Pacific. The first shipment of oil from the new pipeline went to a refinery in China, so the market is there.

thames Silver badge

Trump 2.0

This is just Trump looking for an excuse to try to use tariff threats as a negotiating lever again. The previous time around he declared Canada and Mexico to be "threats to US national security" and slapped massive tariffs on imports from the two.

However, both responded with tariffs of their own, carefully targeted against the districts and states of politicians whose support Trump needed, and Trump was forced to cave and back away with his tail between his legs. I suspect it will go the same way this time, but only after extensive damage to all three economies.

Important Republican party members are already saying that they're not going to let Trump do whatever he wants on this. The biggest trade item for all three countries is autos and auto parts. The industry is so closely integrated in all three countries that the US auto industry would collapse if Trump were allowed to go ahead with it. The Chinese would be falling off their chairs laughing at the US self-destructing on this.

You would think that Trump would have learned from his previous mistakes, but he's evidently learned nothing and forgotten nothing.

And in case anyone imagines that Biden was somehow a paragon, he was just as protectionist as Trump, he was just a lot less stupid and self destructive in going about it.

This is the direction the US are going in regardless of who is in power, and it's why both Canada and Mexico have ongoing efforts to diversify trade away from the US. The US are not the future so far as Canada and Mexico are concerned, and things like this are the reason why.

Datacenters line up for 750MW of Oklo's nuclear-waste-powered small reactors

thames Silver badge

Re: "At 300 MW, these SMRs are not drastically bigger than Pickering's 500MW reactors "

Yes, thank you, it should have said that the new Hitachi SMRs are not drastically smaller than the reactors at Pickering.

As for build time, they did site prep work for all 4 units this past autumn. Construction of the nuclear works will start in early 2025, and the first unit is expected to be in commercial operation by 2029. This is fairly quick, so the concrete work doesn't appear to be a serious bottleneck.

As for the type of steam turbines used, I don't think anyone really cares. Ontario gets the majority of its electric power from nuclear energy and has for many years, so the steam turbines seem to work just fine as is.

The UK AGRs (Advanced Gas-cooled Reactors) were designed around the idea of using higher temperature steam turbines for higher thermal efficiency, but the practical benefits of this were much less than the problems associated with higher temperatures. The AGR design proved to be a technological dead end, as has every other high temperature reactor.

Oklo's reactor is a liquid metal cooled fast neutron reactor. A number of countries have built these types of reactors over the decades and found them too complicated, expensive, and impractical. I haven't seen anything which would lead me to believe that Oklo's design will be any better.

The SMRs being built at Darlington are based on well proven technology. However, Ontario are also planning to build large nuclear reactors as well, again based on existing technology.

thames Silver badge

Re: "to develop new fuel recycling technologies."

France already recycle nuclear fuel, and other countries do as well. It's called MOX (mixed-oxide plutonium-uranium) fuel, and they use it extensively. This is reactor grade plutonium, which is a different isotope mixture than bomb grade plutonium (which has to be specially made for bombs in specialized reactors).

The reason it isn't done more widely is that currently uranium is so cheap that by most estimates it is cheaper to use a once-through fuel cycle and just store the spent fuel until such time as uranium prices rise far enough to make it profitable to recycle the fuel. The French recycle fuel for reasons of energy security, so they don't have to import as much fresh uranium. They estimate that the costs of recycling are about the same as a once-through cycle plus long term storage. Most of the long lived radioactive elements in spent fuel are isotopes of plutonium, which of course the French recycle back into fuel to get burned up in the reactor.

Canada has used MOX fuel made from surplus ex-Soviet nuclear weapons. However, this was more expensive to make than standard fuel (Canada uses natural non-enriched uranium fuel, which is cheaper than the enriched uranium which many countries use). It was only done in this case as part of an international agreement to dispose of surplus nuclear weapons left over from the collapse of the Soviet Union.

However, research has been done on recycling spent fuel from PWR reactors (a very common reactor style) and using it in Canadian designed CANDU reactors and derivatives (CANDUs are used in a number of countries). One method developed in South Korea involves simply chopping the spent fuel rods from a PWR into CANDU compatible lengths and welding the ends shut and feeding them into CANDU reactors. The fuel may be spent (used up) from the perspective of a PWR, but to a CANDU which runs on non-enriched uranium, this is high grade fuel.

The other method (developed by Canada) involves crushing the spent oxide fuel pellets from a PWR and blending them and mixing in fresh uranium to get a more consistent fuel mixture before reforming them into pellets and fuel rods and bundles. This is a dry process which is simpler and produces less waste than the conventional chemical reprocessing system used by France to produce their MOX fuel.

However, neither of these processes has been commercialized, again because uranium is currently so cheap and abundant that it's not economic to do so.

There are other fuel recycling methods as well, but they all run into the same issue of there being no market for it.

At this time the only thing that might make it worthwhile, aside from the national security reasons used by France, is if PWR reactor operators paid companies to recycle the fuel in order to take the waste off their hands. Using up the left over plutonium by recycling it gets rid of most of the long lived waste which would otherwise have to be stored.

Broadcom makes VMware Workstation and Fusion free for everyone

thames Silver badge

Re: Long Time VirtualBox User

I switched to KVM from VirtualBox mainly because VirtualBox VMs were randomly hanging. The final straw was that after an update VirtualBox wouldn't run at all and it took several weeks for another update to fix that. I haven't had any problems at all with KVM so far. However, I had been using VirtualBox for years and my experience with KVM has been a matter of months so far.

For what I'm using it for (software testing) both offer equivalent functionality. However, the virtual network connections in KVM were much easier to set up and use than was the case with VirtualBox.
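
One other nice thing for test work is that KVM guests can be driven from a script via the libvirt Python bindings. Here's a minimal sketch of the sort of thing I mean (the guest name "test-vm" is just a placeholder for whatever VM you have defined, and it assumes the libvirt-python package is installed):

import libvirt

# Connect to the system-level QEMU/KVM hypervisor.
conn = libvirt.open("qemu:///system")

# List the defined virtual networks and whether they are running.
for net in conn.listAllNetworks():
    print(net.name(), "active" if net.isActive() else "inactive")

# Look up a guest by name and boot it if it isn't already running.
dom = conn.lookupByName("test-vm")
if not dom.isActive():
    dom.create()

conn.close()

That sort of thing makes it easy to spin test VMs up and down automatically, which is where KVM has worked out well for me so far.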
