* Posts by thames

1377 publicly visible posts • joined 4 Sep 2014


Nvidia leans on emulation to squeeze more HPC oomph from AI chips in race against AMD

thames Silver badge

I'm not really sure what your point is. As the article states, IEEE floating point exceptions are designed to work a very specific way, and common algorithms are often designed with those methods in mind. Many times those algorithms rely on floating point errors flowing through to the end rather than checking after every operation. You can count on 1.0 + NaN producing a result of NaN, so you can check for NaN later rather than beforehand if that gives better performance.
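The deferred-check pattern described above can be sketched in a few lines of Python (a hypothetical dot product written for illustration, not any particular library's routine):

```python
import math

def dot(xs, ys):
    """Naive dot product: any NaN in the inputs propagates to the result."""
    total = 0.0
    for x, y in zip(xs, ys):
        # IEEE 754 guarantees that NaN * y and NaN + total both yield NaN
        total += x * y
    return total

a = [1.0, 2.0, float("nan"), 4.0]
b = [5.0, 6.0, 7.0, 8.0]

result = dot(a, b)
# One check at the end instead of one per operation:
if math.isnan(result):
    print("input contained invalid data")
```

If an emulation layer broke the `1.0 + NaN == NaN` guarantee, the single check at the end would silently miss the bad input.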

If an emulation of floating point math works differently however, such that 1.0 + NaN produces a result of something other than NaN, then many well proven math libraries and algorithms may not work correctly with it in terms of error handling.

If you have an algorithm which requires checking at certain steps in the process, then that's part of that algorithm. However, you will still have the problem that if the emulation system doesn't behave the way you expected then you can get errors that your error checking doesn't know how to look for.

Hence my conclusion Ozaki will only be useful in very specific hand coded libraries handling very specific algorithms for very specific applications.

thames Silver badge

Re: "By the 1980s, FPUs were becoming commonplace"

Intel used to sell a separate floating point math coprocessor chip which you could buy and install in your PC. So the 8086/8088 for example had a corresponding 8087, and the 80286 and 80386 had corresponding 80287 and 80387 math chips.

The big market for the 8087 was people running Lotus 1-2-3. Applications had to be written specifically to recognize that the 8087 was present and to make use of it, and Lotus 1-2-3 was one of the few which did. Since Lotus 1-2-3 completely dominated the business spreadsheet market, and since spreadsheets were one of the main uses for PCs, the Lotus market was closely associated with 8087 sales. If I recall correctly, you could buy a package which included both Lotus 1-2-3 and an 8087 chip together. I don't know if that was direct from Lotus however, or if it was something that distributors put together.

I was under the impression though that the 80486 had floating point math built in as standard. The 486SX actually disabled the on chip floating point unit so they could sell it at a lower price without affecting sales of the higher priced standard 80486. If you then bought a 487SX and installed it later, it actually was a full 486 chip which disabled the 486SX and took over all of the CPU duties.

thames Silver badge

Being compliant with things like NaN (Not a Number) and +/- infinity is actually pretty important with floating point. I have done a fair bit of work with floating point SIMD (CPU based, not GPU) on large arrays of data and the "proper" way to deal with errors in most cases is to let them flow through to the end and check for them then rather than to check as you go along. NaN and infinity handling is designed so that once you get one of them as a result it continues to propagate through the math. Doing the check at the end results in insignificant error checking overhead, while doing it as you go along results in a lot of overhead and a significant performance hit.
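The same propagation holds for infinity: once an intermediate overflows to inf (or turns into NaN), every downstream result is poisoned, so one finiteness pass over the output array is enough. A minimal sketch in Python (the function name and data are made up for illustration):

```python
import math

def scaled_squares(values, scale):
    # x * x can overflow to +inf; inf then propagates through the
    # subsequent multiply, so no per-operation checking is needed
    return [x * x * scale for x in values]

out = scaled_squares([3.0, 1e200, 5.0], 0.5)

# A single pass at the end finds every element poisoned by inf or NaN,
# instead of a branch after every arithmetic operation:
bad = [i for i, v in enumerate(out) if not math.isfinite(v)]
print(bad)  # indices of the overflowed elements
```

With SIMD the difference is even starker, since a branch per element breaks vectorization while a final reduction over the result does not.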

What this means is that if you emulate floating point, if you don't handle NaN and infinity the same way as is "normal", people may have to come up with entirely new algorithms at the application level. It also means that well proven math libraries may work most of the time under emulation, but give incorrect results for edge cases. Figuring out what those edge cases are is non-trivial once you are dealing with applications rather than benchmarks.

The performance advantages of having errors flow through in a predictable manner are so great that as I understand it, some of the hardware people at CPU companies are talking about introducing similar features for integer math to deal with overflow, although I have no idea how this would work. Traditional simple overflow trapping apparently is somewhat problematic with instruction pipelines, instruction re-ordering, and SIMD.

As for this emulation system, I can't realistically see it being used outside of very specific hand coded libraries handling very specific algorithms for very specific applications. They're really competing against SIMD, and the latter has been getting better as well.

Hyperscalers, vendors funding trillion dollar AI spree, but users will have to pay up long term

thames Silver badge

Re: "I have another 20 years to monetize that customer,"

You're talking about hardware costs. He is talking about service contract lock-in. He is assuming that LLM AI will increase vendor lock-in. Salesforce hope to be able to increase prices at will for locked-in customers, who will find it very difficult to duplicate those custom features with another vendor.

Have a look at VMware's business strategy, which is to squeeze locked-in customers until their pips squeak. That's the plan for the AI business model.

How CP/M-86's delay handed Microsoft the keys to the kingdom

thames Silver badge

Re: Siemens S5-DOS/MT

Siemens was and is a big company. They made everything from mainframes to nuclear reactors. One of their biggest divisions was and is industrial controls, where they were the world leader.

There are various sources of information on line, but I'll stick to the known safe ones (as opposed to ones offering who knows what in terms of downloads) such as Wikipedia.

Here's an article on the Simatic product line, which was Siemens' name for their programmable industrial controls.

https://en.wikipedia.org/wiki/Simatic

Here's a photo of the PG 675 computer which is better than the PG 685 used in the article, even if it is a bit grubby. "PG" was Siemens' term for the programming computer.

https://commons.wikimedia.org/wiki/File:Siemens_Simatic_S5_PG_675.jpg

In the main article, scroll down to the section on "Step 5". It mentions there that the OS for the PG 630 was CP/M.

If you scroll down to "History of STEP5", there is a table which shows that from v1.0 to v1.4 it ran on an unspecified version of CP/M. From 2.0 to 3.2 it ran on CP/M-86. Version v6.3 ran on MS-DOS on the PG750. I think the table is not complete, so there may be other version numbers.

I know however that you could also run STEP5 on an ordinary laptop under MS-DOS (or Windows 95). I also recall that the earlier versions ran on some sort of CP/M emulator in order to run on MS-DOS, but I don't know the details. The very late versions (I think somewhere around version 6 or 7) were ported directly to run natively on MS-DOS.

As for why Siemens used CP/M, when they first came out with the software MS-DOS didn't exist yet. CP/M on the other hand was the de facto standard operating system for business microcomputers.

As for the PGs themselves, you will notice in the photo that they have an extra row of function keys below the standard F1 to F8. These have special symbols which are used by the STEP5 programming software. You will also notice a deep socket to the left of the floppy drives labelled "module". There is a corresponding socket on the photos of the S5-95U and S5-103U CPUs. This was for a ROM module which you could burn so you didn't need a backup battery to keep the program in RAM. Most people didn't bother with the ROM module and just changed the battery every year.

The PDF that you linked in the story for Siemens S5-DOS/MT is actually for the MS-DOS version of the programming software. I don't know if at this point it was running under an emulator or was a native MS-DOS port, but the host OS in that manual is definitely MS-DOS. If you go to chapter 5 it talks about the included utilities for reading and writing "PCP/M" floppy disks so you could exchange data between older PGs (which used CP/M) and newer ones running MS-DOS. I suspect these are licensed third party utilities. The manual by the way has a very good explanation of how to optimize memory management for MS-DOS. Siemens manuals from that era were excellent in terms of providing technical detail even if they did have a tendency to use their own names for things.

As for whether Siemens also sold FlexOS, I wouldn't dispute that. There are lots of different applications in industry, and you can outfit pretty much your entire plant with just Siemens control kit. That however would have been used in some sort of dedicated application rather than the type of programming PC such as I was describing.

They currently sell their own industrial Linux distro, based on Debian, to fill that niche.

https://support.industry.siemens.com/cs/document/109988870/simatic-industrial-os-4-2-%E2%80%93-the-operating-system-for-applications-in-the-industrial-environment?dti=0&lc=en-CA

thames Silver badge

Siemens S5-DOS/MT

The article mentions Siemens S5-DOS/MT. Industry ran on Programmable Logic Controllers (PLCs), and Siemens was the industry leader with the biggest market share. At that time their main product line was the S5 series, which covered the full range from small to large with different models and configurations. I did a lot of work writing programs for equipment controlled by various S5 models.

The programming software (development software) was STEP-5. In computer terms, it amounted to an editor and compiler rolled into a single highly specialized graphical IDE. STEP-5 was apparently written for CP/M. They sold portable computers intended specifically for programming the S5 series, including an interface card to connect to the PLC and a socket which you could use to burn their ROM modules (a big orange block which you inserted into the PLC).

For customers who wanted to just load the software onto their own computer instead of using a dedicated Siemens programming system, they offered a version of STEP-5 which ran on MS-DOS under some sort of CP/M emulator. I don't know the details of that as they were never very specific about it, so I don't know if it was something they created themselves or whether it was a third party product. Siemens were very big on relabelling licensed third party stuff with their own brand name.

Very late in its life they came out with a native MS-DOS version.

The successor to the S5 series was the S7, and the programming software for that was native Windows software. By that time Siemens were huge Microsoft fans and they were always pushing the latest fad from Microsoft into their products across the board, only to watch that feature become obsolete and replaced by something else. As a result there was a lot of product churn. We were able to mostly avoid that as we had a rather jaundiced eye when it came to using proprietary features in systems that were supposed to last a couple of decades.

Zuck forms Meta Compute to pave the planet with 'hundreds of gigawatts' of AI datacenters

thames Silver badge

Not much to show for it

El Reg said :"However, compared to its competitors, Meta doesn't have much to show for all of its spending."

Eh? None of the companies involved in the AI bubble have much to show for the amount of money they have sunk into this so far.

Furthermore, none have made any convincing arguments for how they are going to make a profit out of it in the future either. The capital and operating costs are simply too high for the rather meagre results the technology is capable of.

I suspect that before too long success will be measured in terms of who failed quickly enough to avoid sinking too much money into it.

Imagine there's no AI. It's easy if you try

thames Silver badge

Re: That's not survival. It is an unnecessary nightmare.

I could see structural batteries in things like high end phones and tablets to make them a tiny bit smaller or have a slightly larger battery. Phone cases aren't really that strong anyway.

It doesn't make much sense in cars though. The structural parts of a car are designed for strength and the trend there is for stronger steels to reduce weight. You aren't likely to be able to create a battery material which can match high strength steel when it comes to strength and durability. You also have the problem of getting electric power from all over the frame to the motor power pack. A car is big enough that you can always find places to put the battery.

Cars are big objects so it's always going to be worth while having the frame be the best possible frame and the battery be the best possible battery and so have these as separate components.

It's when you get to small objects like phones where small size is important that combining functions makes sense.

Microsoft wants to replace its entire C and C++ codebase, perhaps by 2030

thames Silver badge

One more thing

This is one more thing to add to your list of "things that are not going to happen".

Microsoft have yet to demonstrate replacing even one significant program that they derive revenue from by using AI to rewrite something that was in C++ to something written entirely in Rust. They have re-written bits of programs in Rust, but that's it.

One programmer is not going to produce 1 million lines of finished code every month, AI or no AI. Of course they may get an AI to submit variations on the same 10 lines of code 100,000 times in an effort to hit a numerical target, but that isn't going to get them to an all-Rust program.

What they are doing sounds more like a desperate effort to promote customer use of AI than a serious goal for themselves.

US punishes China’s ‘dominance’ of legacy chips with zero percent tariffs

thames Silver badge

Re: The result of an investigation into China’s semiconductor industry

There's several things going on. One is that the main market for these older chips is in low cost electronic goods, of which China is the biggest manufacturer. The chip plants are located where their customers are. There's a huge market for this stuff because it's cheap and good enough.

The other thing is that the US has spent a great deal of effort in preventing Western companies from selling the latest chip making equipment to Chinese companies, only allowing stuff for making older generation chips of the type we are talking about here. So, Chinese chip companies had little choice except to stick to making lower end chips based on older chip technology. Washington patted themselves on the back for this brilliant success.

Then the US observed that Chinese companies were making lots of older generation chips. The American government studied this carefully and decided that this focus on older generation chip technology instead of buying the latest Western equipment was due to some deeply laid plan which the Chinese government have cooked up.

I'm sure this US strategy makes sense to someone.

All aglow about DCs, investors launch $300M at microreactor startup

thames Silver badge

It's not really an SMR, it's what is known as a "micro-reactor". If you go to their web site they say that it is intended to replace diesel generators which currently power remote communities and mines. This is why it's built as a single container size unit.

There's a whole separate market for micro-reactors. They are competing against diesel generators in terms of cost, and for diesel the big cost is often that of shipping the fuel in. In many places these shipments take place once per year.

Numerous attempts at using wind and solar in these applications have generally been unsuccessful. Due to their intermittency, they can't actually replace diesel, just supplement it. Due to their rapid variability (fluctuations over minutes or even seconds) they can only provide a small percentage of the total power (10 to 15 percent is typical in various projects in Canada, even with storage). There is no huge grid to absorb variability, and diesels are limited in their ramp rates without damaging them, so they can compensate for the variability of wind and solar only to a very small degree, which limits how much wind or solar you can have. You also can't have diesels run at idle for extended periods of time without carboning up inside. You need to put a minimum load on them.

Also fuel efficiency of diesels drops with reduced load, so your fuel savings are much less than the amount of power generated by wind or solar. On top of that, saving a few percent of fuel may not save you any money because much of the cost is shipping, and you have to have that same barge or ice road shipment come each year regardless of how much they deliver.

The ideal solution is hydro electric, but that is site specific, and building very long transmission lines is not practical if there isn't a suitable site nearby. This is why it isn't used much for small communities.

What micro-reactors offer is something that can completely replace diesels, something that wind and solar have not been able to do despite very large sums of money having been invested in numerous projects for this. There are several hundred communities and mines in Canada alone which depend on diesel and which are a potential market for micro-reactors.

The company in this story have contracts to supply the US military with micro-reactors to power remote bases. For many remote US bases, the main logistics burden is shipping diesel.

As to whether this company's product works as well as they hope it will, nobody knows yet. They hope to build a fuel demonstration project next year. That apparently won't produce electric power, just test whether their proprietary fuel design works like the simulations say it will.

As for using these reactors to power huge AI data centres, I don't see it being practical. They operate on a completely different scale. I suspect the AI industry interest in this sort of technology is a form of greenwashing, just like with that study that claims that wind can power data centres. When you dig into the latter it turns out that the actual plan is to have a big gas turbine on site plus a promise to buy some wind power from offshore wind farms located somewhere. With these micro-reactors, if the AI money is still around when they are ready for market, then I suspect they will buy a couple and issue press releases about them while actually getting most of their power from gas turbines.

So as far as this reactor is concerned, it looks interesting if used for what its designers intended it for. It's not meant for the AI industry however.

FreeBSD 15 trims legacy fat and revamps how OS is built

thames Silver badge

Tried it

I installed FreeBSD 15.0 on Tuesday and have had zero problems with it. I run it in a KVM VM on Ubuntu 24.04. I run it as a server without a GUI and gave it 1 GB of RAM and it seems quite happy with that running my usual test routines (I use it to run automated software tests).

The one big surprise was that it is still on Python 3.11 when every other major distro seems to have 3.12 or later (the latest version of Python is 3.14). Python 3.11 is more than half way through its support life, no longer receives bug fixes, and drops out of all support, including security fixes, in two years.

Distrowatch says that 15.0 ships with Python 3.12, but my test scripts are clearly reporting 3.11.13. I have to wonder if something went wrong here.
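For what it's worth, this kind of mismatch is easy to confirm from inside a test script, since the interpreter reports its own version regardless of what any release notes claim. A minimal example:

```python
import sys

# Report what the interpreter actually is, independent of distro documentation
print("Python %d.%d.%d" % sys.version_info[:3])

# A test harness can flag an older-than-expected interpreter up front:
if sys.version_info < (3, 12):
    print("warning: interpreter is older than the documented 3.12")
```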

US Navy scuttles Constellation frigate program for being too slow for tomorrow's threats

thames Silver badge

Re: This isn't to speed up delivery to the fleet

All or nearly all new frigates and destroyers are powered by either a combination of diesel and gas turbine, or just diesel. The ships cruise on diesel for economical operation and gas turbines are fired up for high speed operation. The newest ones tend to have an electric drive system, where the diesels and gas turbines drive generators and the generators feed electric motors which turn the propellers. Any engine or combination of them can feed any electric motor. On the slightly older ones the gas turbines may be able to be coupled directly to the propeller shafts when needed instead of going through the electric drive system, but they are only used when high speed is needed.

The replacements for the Burke class destroyers will look fairly similar to the Constellation class ships, but much bigger and with more vertical launch cells for missiles.

The differences between ships these days are mainly in the details. These include things like damage control and fire fighting arrangements, size of magazines, and the electronics fitted. The combat systems can cost more than the rest of the ship put together.

The Burke replacement will apparently displace about 14,500 tons and have 96 vertical launch cells, as compared to half that displacement and a third of the number of vertical launch cells for the Constellation class.

The Constellations were supposed to be an off the shelf quick fix for the failed LCS program. However, the US ended up making so many changes that only about 15 percent of the original FREMM design was left by the time they were done with it. This sort of defeats the whole purpose.

Because the Burkes (and planned replacements) are so large and expensive, the US wanted a smaller frigate to give them the numbers to cover places in the world where attention is needed but the threat level is lower. They also wanted them quickly to make up for the time lost pursuing the LCS dead end.

It's hard to say where things are going now, unless someone has a replacement already lined up which is ready to build. Given how that sort of thing tends to leak in the US, I would be very surprised if that were the case however.

thames Silver badge

Re: You know what they really need...

Pretty much all modern naval vessels of intermediate size can take containerized mission systems. The main armament goes in permanent mounts, but the specialist stuff that a ship only needs occasionally goes in containers.

thames Silver badge

Re: This isn't to speed up delivery to the fleet

Canada chose the Type 26 over the FREMM because it wanted the best there was and was willing to pay for it. It was what the RCN wanted from the beginning, and they were willing to write the rules to allow the Type 26 to be considered "off the shelf" even though construction hadn't yet started in the UK; they had that much confidence in the UK's ability to design ships. All of the ship designs looked at by Canada were European, by the way; the US had absolutely nothing that was worth considering. The US have fallen quite badly behind in terms of naval ship design and construction methods.

The US chose the FREMM because they wanted something cheap and off the shelf to put into production immediately to fill the yawning void in their fleet. They need something cheap enough to be built in numbers that could be sent to secondary areas, as the Burke class and its planned successor were seen as too expensive to be used anywhere except as part of their front line fleet.

The British Type 31 with an Mk41 launch system and ESSM missiles instead of Sea Ceptor would be pretty much what the US were originally looking for before they decided to change everything.

US naval shipbuilding is an utter shambles, with major problems in their frigate, icebreaker, and submarine programs. They recently bought an icebreaker design from Canada and Finland to reboot their disastrous icebreaker program, we'll have to see if they completely stuff that up by redesigning everything as well. Australia's plans to buy some second hand US nuclear submarines to fill in the gap until the AUKUS subs hit the water are in severe doubt as the US cannot currently build submarines fast enough to replace the ones that they have to retire due to age, so they may have none to spare when the time comes.

By cancelling the Constellation class ships the US are simply digging themselves deeper into a hole they are already shoulder deep in, as they have nothing ready to replace it with.

Britain plots atomic reboot as datacenter demand surges

thames Silver badge

Re: Hardly makes us meatbags feel better ...

UK interest in reviving nuclear power predates the AI bubble. It's based on the goal of the total electrification of society in order to meet environmental goals.

What happened is that reality sunk in and people realized that there is no path to "net zero" which involves wind turbines. Wind turbines are joined at the hip to fossil fuels and will be forever. Solar panels are the same.

If you look at which countries in Europe have low carbon emissions in their electricity sector, it isn't the ones which depend on wind/gas. It's the ones which depend on hydro-electric and/or nuclear.

So if the UK genuinely desires to save the environment, then it either needs to find a continental scale high mountainous plateau somewhere in, say, Norfolk and build hydro-electric plants there, or else build enough nuclear power plants to power Britain. The latter sounds like the more realistic plan.

thames Silver badge

Re: Good but ...

The fundamental issue is the structure of the electricity market. To start with, it isn't a natural market. It's an artificial construct which is intended to emulate a real market but is in reality a very convoluted system of regulation intended to produce a pre-determined outcome.

What it does is optimize for short term profits rather than long term low cost or for stability or reliability. Since there is no long term security, investors optimize for short term profits. This also means that long term investments have to pay higher interest costs because they have no guaranteed market. This is the real reason why the UK has built gas turbines / wind farms (the two are joined at the hip) rather than nuclear power plants. The real money is in the subsidies and offsets, producing reliable supplies of electricity at the lowest possible cost is a mug's game.

What the UK needs to do is to admit that the "deregulated" electricity market experiment has failed and move on from it.

Amazon-backed X-energy sweet talks investors into another $700M for small modular reactor dream

thames Silver badge

Fuel Shortage

The problem that both X-Energy and Kairos have is where to get their very special fuel enriched to higher levels than normal commercial fuel. Currently the only commercial supplier is in Russia, which is problematic.

American and European suppliers are reluctant to build plants to supply this as there aren't currently any customers for it. It's a chicken and egg problem.

And of course from a customer perspective there is the issue of vendor lock-in with proprietary fuel designs.

Reactors using normal commercial fuel (low enriched or natural unenriched uranium) don't have this problem.

DragonFire laser to be fitted to Royal Navy ships after acing drone-zapping trials

thames Silver badge

Re: More missiles

Phalanx is becoming obsolete mainly because missiles are getting faster which results in shorter engagement times. So simply hosing the sky with metal is starting to be a losing proposition. Goalkeeper faces the same problem.

Stuff like 40mm guns with timed fuses allow for longer engagement ranges and so more time in which to try to hit the missile before it arrives.

This is why 40mm guns (Bofors, 40CTA, and others) are making such a comeback in newer designs now. 30mm guns with timed fuses (e.g. AHEAD) are similar, but probably not quite as effective. Ships may have both 30 and 40mm, so it's not one or the other. There are self contained models which just need power and have their own radar or other sensors, and have non-deck-penetrating mounts. However, integration into the ship's CMS is more effective because it can take advantage of the better sensors.

These guns will also make mincemeat out of aerial and surface drones in self defence situations.

As for missiles, yes I agree that a million dollar missile is still cheaper than a billion dollar frigate if it allows a ship to effectively be in two places at once.

thames Silver badge

Re: More missiles

If the drone is 30 miles away, then all it needs to do is to stay low and it will be below the horizon.

For anti-drone self defence work, I think 30mm or 40mm guns with timed air-burst anti-drone ammunition (off the shelf in both cases) will make mincemeat of pretty much any drone in a self-defence situation. The 20mm Phalanx is very outdated and being phased out in most navies.

thames Silver badge

Re: How long before ....

More plausible is to have the drone fly low and keep a coastal city or a concentration of commercial shipping in the background. Using a laser to shoot down a drone under those circumstances would be like Assad dropping gas on cities because there happen to be some rebels hiding there.

I have a fair bit of experience with industrial welding lasers and they are an absolute pain to deal with because of the great lengths that are required to maintain safety. The beam can undergo multiple reflections and still be dangerous to eyes (the main threat), so there's simply no way to operate them safely in open air in a factory. This makes troubleshooting and maintenance very difficult, and only the company's very best tradesmen had any hope of keeping them going. As a result, laser welding was something we did only as a last resort if no other process was able to do the job.

UK Covid-19 Inquiry finds early pandemic surveillance was weeks out of date

thames Silver badge

Re: Scamdemic

"Shield the vulnerable". Wow, how insightful, especially as everybody was considered vulnerable.

The IT department could prevent all IT security problems by simply closing all security vulnerabilities instead of telling users to not click on dodgy email attachments. Now why did they need me to tell them that? What utter fools they are! I call this the "Great Blackpool Declaration". Make sure to cite it as the answer to any and all security problems whenever another IT security story is posted.

thames Silver badge

Re: Wasting my taxes

Yes, your doctor will tell you to stop smoking, lose weight, take up exercise, and get all your vaccination jabs. He'll probably tell you to do that without you even having to ask.

thames Silver badge

Re: Scamdemic

So the financial markets saw the pandemic spreading and knew it was going to be really bad? Well, so did everybody else who wasn't hiding their heads under the covers and hoping it would just all go away somehow.

thames Silver badge

Re: Scamdemic

Sweden eventually admitted that the "Swedish approach" was an utter failure and implemented pandemic restrictions. The Sweden that did nothing and was just fine existed only in the imaginations of certain people.

thames Silver badge

Re: Scamdemic

All the "Great Barrington Declaration" (which few people who like to cite it ever bothered to read) said was that they didn't like the pandemic and they wanted someone to just make it all go away somehow. It was a list of grievances with no actual practical proposals for any alternatives. They said that coming up with ways of making their "plan" work was someone else's responsibility; they were just the ideas people.

The IT equivalent to the "Great Barrington Declaration" would be for users to say that usernames and passwords are too much bother and they should be gotten rid of. What, security? That's IT's problem. It's up to them to figure something else out.

thames Silver badge

Re: Scamdemic

And here's another tip. Never, ever, believe anything posted by an "Anonymous Coward".

TP-Link accuses rival Netgear of 'smear campaign' over alleged China ties

thames Silver badge

Re: A similar thing happened with Supermicro in 2018

The story originated with Bloomberg, who mainly cited "anonymous sources". They named only one source for the claims, a security researcher. The mainstream press then simply took the Bloomberg claims and reprinted them.

An Australian security podcast however decided to do some actual journalism and tried to verify the claims by asking the sole named "source" about it. That person said he had seen no evidence to support the claims, that Bloomberg had misrepresented him, and that he found the whole thing highly implausible.

In other words, Bloomberg's story fell apart as soon as anyone did even minimal checking. The mainstream press however continued to pretend they hadn't heard that.

As to why that story was created, apparently Bloomberg writers (I'm afraid to call them journalists) have their evaluations and bonuses at least partially based on whether their stories "moved the market" (affected share prices). So they have a strong incentive to publish stories that cause share prices to go up or down. If someone were to feed them a dramatic story which would affect the share price of some company, they don't have a lot of incentive to question it.

Google and Westinghouse lean on AI to speed US nuclear plant builds

thames Silver badge

AI?

I suspect there's very little if any AI involved in this software. More likely is that they called optimization algorithms "AI" for marketing purposes.

Datacenter fossil fuel habit 'not sustainable' as AI workloads soar

thames Silver badge

It's the AI Bubble That Isn't Sustainable

It's the AI bubble that isn't sustainable, from a financial perspective at least. Electric utilities would have to be mad to commit to installing generating capacity that will never be used by the AI companies when the AI industry collapses.

As for that study that said that renewable energy would be cheaper, that "renewable energy" turned out to be gas turbines at the data centres combined with vague promises to buy electricity from wind turbines in the North Sea and delivering it via the grid that apparently can't deliver it anyway. It's just more green washing.

Why Elon Musk won't ever realize the shareholder-approved Tesla payout

thames Silver badge

Re: What's next?

The fantasy is the current belief by shareholders that Tesla is worth more than nearly the entire rest of the auto industry put together world wide despite being only a minor player in terms of sales. Yet that's what their current market capitalization implies.

There are only so many people in the world who can afford to buy cars. Tesla would have to have a near monopoly on world wide auto sales to justify their current stock price, and that is never going to happen.

A more realistic long term value for Tesla's shares would be for them to decline drastically.

Britain's first small modular reactors to be built in Wales

thames Silver badge

Re: Omdia not Very Omniscient - SMRs are currently under construction in Canada

No, the SMRs being built in Canada and announced in the UK in the story are very conventional designs based on well proven technology scaled down and simplified and will use commercially available fuel. These are full scale commercial plants. There's no reason to expect any issues with the engineering or technology. This is not experimental.

The US plan to build some experimental SMRs based on unconventional technology and which use proprietary fuel for which there are no existing commercial suppliers. These reactors will not necessarily even have a steam system connected to them, just cooling systems and may be scaled down in size to below what would be utility size SMRs. They may be located at US nuclear research labs, which would allow them to bypass normal US licensing (which is very slow and bureaucratic compared to the UK or Canada). These are intended to prove that the technology works. If these work then they will be followed by larger commercial versions, assuming they find a buyer. These are pretty clearly what he is talking about with respect to test or experimental reactors.

Most of the proposed data centre SMRs are of these unconventional designs. As I said, I suspect that few if any of the data centre proposals for this type will ever be built. Utility SMRs based on conventional designs are finding commercial buyers, as seen in the story.

thames Silver badge

Omdia not Very Omniscient - SMRs are currently under construction in Canada

I can't find the Omdia report through Google, but if the quote given is representative of the report, then Omdia don't seem very aware of what is happening with SMRs.

A plant with four 300 MW SMR units started construction in Canada just east of Toronto last year. The first reactor is expected to start operation before 2030. This is for a utility customer who currently operate large nuclear reactors.

As for data centres building their own SMRs, I'll believe it when I see it. The AI bubble is about to pop and when that happens capital spending on data centres will get cut back drastically.

We still need reliable electric power to light our homes and businesses, and for an increasing number of people to charge their cars, and for that nuclear power will see increasing use. That, I believe, is where most SMRs will be built, and these will, like the reactors announced in this story, mainly be designs based on conventional and well proven reactor technology, scaled down and simplified, rather than the exotic technologies many of the AI data centre operators talk about.

UK's Ajax fighting vehicle arrives – years late and still sending crew to hospital

thames Silver badge

Re: Situational awareness

@Jellied Eel said: So Ajax is supposed to be a recce vehicle, which used to mean sneaky.

No, there's two types of recce. One is "sneaky", but a lot of that work will be done by drones now.

The other is medium forces which are supposed to operate in a dispersed manner ahead of and around the main heavy forces to seize strategic points which the main forces rapidly follow up on. In British army terminology this comes under "recce", while in the US they are calling it "cavalry". It's the same basic concept however.

The point is to avoid excessive concentration of forces which can be readily found and attacked by smart weapons such as missiles, and now drones. The need for this was foreseen years ago, and is one of the reasons that Ajax was developed.

One of the things needed for the idea to work is some sort of direct fire weapon without having to take along a full sized tank. This is the reason that the 40mm CTA gun used in Ajax was jointly developed with the French (who, along with the Belgians, are also using this gun).

The Americans were involved in the early days of this gun as well, and so it was designed to drop into the same space as the 25mm gun in the Bradley vehicle to give it an equivalent upgrade. This was tested, and it fit as planned. However, the Americans had their usual fit of "NIH" plus "make bigger" and decided they wanted a 50mm gun based on an upscaled version of their 25mm gun. This ended up being so huge that it had no hope of fitting in the Bradley so now they are developing a whole new and bigger vehicle to carry it. The American experience with this has made the British Ajax look like a model of respectability.

So, there's a reason for all this. The Ajax project could have been better managed, but part of the problem had to do with a political desire to make sure the project didn't go to BAE due to a perception that the company was taking UK business too much for granted.

Instead the contract went to GD, who built the hulls in their Spanish factory where quality control was appallingly poor.

The vague "vibration and injury" stories that you read are mainly down to the noise cancelling headsets which aren't working. The ones that GD recommended and provided apparently work fine. The ones that the UK MoD provide, though, are not suitable for the application, and a lot of the ones issued for the testing program are apparently simply broken when issued and so don't work at all. The latter is down to army internal procedures. These sorts of headsets are standard practice for armoured vehicles.

thames Silver badge

Re: Armchair Journalists

Anti-tank weapons were developed and deployed in WWI, as soon as the Germans encountered tanks. They developed and deployed the 13mm anti-tank rifle, which would punch through the thin armour of any tank of that era. They also developed the first heavy calibre machine guns using that same 13mm round, which was also intended as an anti-aircraft weapon ("tank und flieger").

The Germans also changed tactics, moving lighter field guns up close to the front lines where they could operate in direct fire mode. They were capable of utterly destroying any tank of that era with one hit.

In the period immediately following WWI, pundits declared that the tank was obsolete now that weapons had been developed to counter it.

None the less tank development continued, and here we are more than a century later and tanks are still with us.

Cybercrims plant destructive time bomb malware in industrial .NET extensions

thames Silver badge

Re: Eh?

I said several times that this could cause economic losses. What I was questioning were the claims that it would cause the safety problems claimed.

If the safety systems do not prevent injury or loss of life to the operators, they are not effective. Where I live you have to get the safety systems signed off by an independent licensed professional engineer, who has no incentive to sign your design package if it isn't right. In my experience he does not even look at the PLC program, as it's irrelevant to his review. The plant safety committee, which has both management and worker representatives on it, then has to review the safety report before approving the machine to operate. Until then it stays locked out.

In the story it said that "IPandya said this could lead to safety systems failing to engage, actuators not receiving instructions, and other consequences." That's not how machines work. Safety systems do not engage or disengage under the control of the PLC program. They are always active and operate in parallel with the PLC. Actuators do not receive instructions directly from the ERP system; they are operated by the PLC program, whose interlocks actuate them at the appropriate point in the sequence and refuse to operate them if the sensor states are not correct.

A safety system starts with the assumption that there are bugs, both software and hardware, in the rest of the control system. There is nothing the software in question in this story can do to cause a hazard which wouldn't be already addressed in the overall system design.

The issue here is one of economic losses. This isn't really different from ransomware attack.

thames Silver badge

Re: Eh?

Actually the PLC programmer assumes that safety is being handled at the next level down in the control system by dedicated safety devices (e.g. safety relays, light curtains, etc.), so there's at least two layers between your software and the actual hazard.

This sort of kit became standard at least two decades ago. At the PLC level the program has to monitor the safety devices to take into account when the safety system isn't going to allow it to do anything (to avoid signalling spurious faults), but the safety behaviour is determined by how this safety kit is physically wired up and installed.

The real problem that the malicious software in the story presents is economic losses from downtime. The machine's overall control design by the way should have a means of entering recipes manually through an operator panel in the event of an MES/ERP failure or other downtime. At least this is what I am familiar with.

A lot of the industrial security reports seem to be written by people who are familiar with IT security but little or no experience with how kit is actually applied in industry.

I can remember MS-DOS systems with data logging from PLCs being handled by Lotus 123 spreadsheet macros talking to AB PLC-5s via some sort of drivers and a special AB proprietary communications board. I was not responsible for that abomination I hasten to add, but they were not that uncommon. I did do work on the PLC, mainly machine cycle time optimization. You always assumed that Lotus 123 or something else was going to screw up, so there was a handshake between the two ends to ensure that the data was saved.

When MS-DOS and Lotus 123 were replaced by MS Windows, Excel, and DDE, it didn't really get any more reliable. Stuff actually written for the purpose and logging into an actual database is probably a lot less failure prone.
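The save handshake mentioned above can be sketched as a toy model in a few lines of Python. All names and flag conventions here are illustrative; this is not any real PLC driver API, just the general pattern of "raise a flag, hold the record, acknowledge only after the save":

```python
# Toy model of a PLC <-> data logger save handshake: the PLC raises a
# "data ready" flag and holds the record until the logging side has
# actually saved it and acknowledged. Illustrative names only.

saved = []

def logger_poll(link):
    """Logging side: persist the record first, only then acknowledge."""
    if link["data_ready"] and not link["ack"]:
        saved.append(link["record"])   # save before acknowledging
        link["ack"] = True

def plc_scan(link):
    """PLC side: release the record only after seeing the acknowledge."""
    if link["data_ready"] and link["ack"]:
        link["data_ready"] = False     # record may now be overwritten
        link["ack"] = False            # reset for the next record

link = {"record": 42, "data_ready": True, "ack": False}
logger_poll(link)   # logger saves the record and raises ack
plc_scan(link)      # PLC sees the ack and clears both flags
print(saved)        # [42]
```

If the logging side crashes mid-save, the flags never advance and the PLC simply keeps holding the record, which is the whole point of the handshake.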

thames Silver badge

Eh?

Having some experience with programming PLCs and designing industrial safety systems, based on the description of how it works I am struggling to see how the described methods can trigger the safety problems stated.

These are basically libraries for connecting databases to PLCs. There are typically two things you do with this. One is to log production data for later analysis. The other is to download "recipes" which are sets of numbers telling the PLC things like what torque setting to use in a screwdriver spindle on product 'c' versus product 'd'.

What you don't have are settings that will blow up the factory. Such a thing would fail a basic safety review of the equipment design.

You always assume that a read or write operation from or to a PLC could fail and take that into account in your equipment design. Factories are electrically noisy and unforgiving places, and communications failures are assumed to be part and parcel of operating there.

Furthermore, standard PLCs are not rated as safety devices (there are dedicated safety PLCs, but these are not what we are talking about here). A PLC is a programmable electronic device running a one-off custom program and a basic safety design assumes that there is potential for failure due to software bugs or failures in the semiconductors. You always assume the PLC program has lots of bugs of its own.

Actual safety systems operate outside of and in parallel with the PLC. If the PLC tries to do anything which would cause a safety issue the safety system overrides it. The PLC could try as hard as it likes, it isn't going to be allowed to do the thing that would blow up the factory.

What these packages can do is cause downtime and economic losses while someone troubleshoots the problem. That could perhaps be the basis for a cyber-extortion threat, but I don't know enough about that business to try to guess how the initiator would know who his targets are and how to make financial demands on them.

Famed software engineer DJB tries Fil-C… and likes what he sees

thames Silver badge

Re: Vs Rust

Rust doesn't address the issue of what to do about existing code bases. Re-writing massive amounts of code in another language and reintroducing whole classes of bugs that were found and fixed years ago in the existing one isn't realistic. This problem is what Fil-C and other similar projects are trying to address.

I suspect that projects like Fil-C will pioneer and demonstrate concepts which will get incorporated into mainstream C compilers eventually.

thames Silver badge

Re: Type checking and compatibility

There is no such thing as a one size fits all programming language. There's no such thing as a programming language which is better than all other languages at all fields of application. This is why we have multiple programming languages. You need to learn multiple programming languages and use each in places where its strengths reside.

One of Python's great strengths is its ability to be combined with C so that you get the advantages of both, the rapid application development and concise code of Python, and the high performance of C in focused elements where it really matters.

As a result of this there are a lot of Python libraries which are written in C (and I happen to maintain a few).

The Rust fans want us to "just re-write all your existing software in Rust", which is going over like a lead balloon with people who have large existing code bases that are working fine with no errors having been reported in years. The last thing I want to do is to re-write lots of code and introduce new bugs which weren't in the old stuff.

And on top of this Rust is heavily tied to LLVM, which doesn't have the wide variety of extensions that GCC has, which I use to access the CPU specific features which are necessary to get high performance. Apparently what I am supposed to do is re-write all those bits in assembly language. That's just great - apparently the answer to memory bugs is to re-write most of my software in assembly language, put a Rust wrapper around it, add in inter-op code so it can be used from a C program (the Python run time), and call it "more secure".

What is really needed is something that is very similar to C but which can deal with a lot of the common memory management issues, which is what things like Fil-C are trying to do. I am following this with a great deal of interest to see if these ideas make it into standard C.

thames Silver badge

Re: Type checking and compatibility

@MarkMLl said: "As I understand it Python v3 does the exact opposite: variables are initially untyped but an expression has a type."

What many people think of as variables in Python are actually references to objects. This is a very important distinction because Python is very much an object oriented language.

I can do x = 5 followed by x = 'a' because x is neither an integer nor a string; it is a reference to an object, and it is the object which may be an integer or string (or whatever). Another way of looking at it is that you are effectively dealing with pointers rather than variables.

However, if you try to do something like 5+'a' then you get an error which says TypeError: unsupported operand type(s) for +: 'int' and 'str'.

Compiler errors in Python are generally related to syntax, e.g. SyntaxError: invalid syntax.

Static languages such as C must figure out what machine code operations they must execute at compile time. This is why data types must be defined precisely. The machine code for adding two floating point numbers together is not the same as the one for adding two integers together.

Dynamic languages such as Python defer such decisions until runtime.

Python is however very memory safe because you don't allocate memory, you just create references to objects and the objects figure out for themselves how to allocate the proper amount of memory for their needs.
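The behaviour described above is easy to demonstrate in a few lines of Python, showing that the name can be rebound freely but a type mismatch still fails at runtime:

```python
# Names in Python are references to objects; the type lives on the object.
x = 5        # x now refers to an int object
x = 'a'      # rebinding x to a str object is perfectly legal

# Type mismatches are caught at runtime, when the operation is attempted:
try:
    result = 5 + 'a'
except TypeError as e:
    print(e)   # unsupported operand type(s) for +: 'int' and 'str'
```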

Canonical CEO says no to IPO in current volatile market

thames Silver badge

I suspect that it is more a reflection of that an IPO takes time to go through, and the IT industry is in the midst of an AI bubble that is getting ready to pop. When the AI bubble pops it is likely that the entire stock market will take a nose dive.

The US financial sector is looking more than a bit shaky at this moment, with senior executives of major financial companies warning that there are a number of rotten firms (not their own of course) that are on the verge of collapse due to sub-prime loans again.

So we're looking at a combination of the dot-com bubble collapse and the 2008 financial crisis happening together. That's not exactly the best of times in which to launch an IPO.

India to dethrone US for dev numbers as AI reshapes coding, says GitHub

thames Silver badge

Re: Very dubious numbers

They are claiming that there are 28 million software developers in the US. The US labour force is 170 million. So they say that 16 per cent of the US labour force work as software developers. I found that claim to be a bit doubtful.

Next they claim that the number of US software developers will rise to 54.7 million within 5 years, resulting in nearly 1 in 3 workers in the US being a software developer. This is even more implausible.

Official US statistics give numbers that are roughly 15 times smaller.
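The arithmetic behind those percentages, using only the figures quoted above, is a two-line plausibility check:

```python
# Quick plausibility check of the report's claims, using only the numbers
# quoted above (labour force and developer counts in millions).
labour_force = 170.0   # approximate US labour force, millions
devs_claimed = 28.0    # claimed current US software developers, millions
devs_in_5yr = 54.7     # claimed US software developers within 5 years

share_now = devs_claimed / labour_force    # fraction of all US workers
share_5yr = devs_in_5yr / labour_force     # who would have to be devs

print(f"claimed share today: {share_now:.0%}")       # roughly 16%
print(f"claimed share in 5 years: {share_5yr:.0%}")  # roughly 1 in 3
```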

What is more probable is that this "report" was created using an AI which simply made the numbers (and everything else) up out of thin air.

OpenAI tells Trump to build more power plants or China wins the AI arms race

thames Silver badge

Re: 100 Gigawatts per year???

100 gigawatts of what though? A quick google indicates that most of that 429 GW of new capacity added in China is wind and solar. Another quick google and some basic math shows that wind and solar in China are running at about a 17% capacity factor. That is, they actually output only the equivalent of 17% of their nameplate capacity, since they can't run at full capacity all the time (this is pretty normal). Compare this to, say, a well run nuclear power plant that can maintain a 95% capacity factor over multiple years.

Let's put it another way, 100 GW of wind and solar is only equivalent to 17 GW of nuclear.

So if they are talking about adding 100 GW of wind and solar capacity in the US, then that amounts to (assuming numbers similar to China's) the equivalent of 17 nuclear power plants per year. That's still a lot, but nowhere near the same problem.
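That back-of-envelope comparison can be written out explicitly. The capacity factors are the assumptions stated above, not measured data:

```python
# Nameplate capacity vs. actual average output, using the figures above.
nameplate_gw = 100       # nameplate wind + solar capacity being discussed
cf_wind_solar = 0.17     # assumed capacity factor for wind/solar in China
cf_nuclear = 0.95        # capacity factor of a well run nuclear plant

avg_output_gw = nameplate_gw * cf_wind_solar   # average real output, ~17 GW

# Nameplate nuclear capacity needed to deliver the same average output:
equiv_nuclear_gw = avg_output_gw / cf_nuclear  # a bit under 18 GW

print(f"{avg_output_gw:.1f} GW average from {nameplate_gw} GW wind/solar")
print(f"{equiv_nuclear_gw:.1f} GW of nuclear nameplate matches that output")
```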

Of course an electric utility would be taking on a huge risk to build new generating capacity just to feed the AI bubble in the US, as it is quite likely that this capacity would never be used once the bubble pops. If they can bring old plants back on line that is a safer option, as it is likely those plants will in fact never be required to operate.

AI eats leisure time, makes employees work more, study finds

thames Silver badge

Re: Don't get this study

Or possibly AI has made people less productive and they have to work more hours to get the same amount of useful work done.

Amazon spills plan to nuke Washington...with X-Energy mini-reactors

thames Silver badge

Re: More Micro than Small

Natural uranium pressurized heavy water reactors were built in Canada with 20MW and 200MW electrical output, well within the micro and small reactor size range. The next generation of reactor after those were in the 520 MW size, which is not much bigger than Rolls Royce's new SMR. India's early nuclear power plants (built with Canadian technology and assistance) were 220MW, well within the standard SMR definition in terms of size.

Later reactors were built larger mainly because of a desire to gain economies of scale from increasing sizes, much like with every other commercial reactor line.

There is no technical reason preventing natural uranium SMRs from being built. There is a 300MW CANDU SMR design, but the company has decided to focus its engineering man-hours on finishing off their 1000 MW Monark design as they believe there is a more immediate market among customers they have talked to about it for that one. The latter uses modular construction techniques which the company learned from building reactors in Asia and it offers the same sort of rapid construction advantages that SMRs do in a size which also offers economies of scale. They said they would go back to the SMR design later in order to have something to offer to countries or provinces which are too small to be able to add a large reactor onto their grid.

There are more CANDU reactors in the 700 MW size range which are about to be built in Romania (they already operate some) to replace coal fired plants, so there is still an active market for larger reactor designs.

thames Silver badge

The SMRs are under construction right now in Canada just east of Toronto and are being built by a utility with extensive experience in building and operating nuclear reactors. They're based on well proven conventional reactor designs and use standard fuel. There's no need to go to the extent of using exotic technologies in order to build an SMR.

thames Silver badge

Re: More Micro than Small

The Germans were the pioneers of pebble bed reactors and built several of them starting in the early 1960s and continuing on into the 1990s. They used several types of fuel, including a form of the TRISO fuel X-Energy and Kairos intend to use.

They had lots of problems with them and eventually gave up on the idea. The technology then went to South Africa, who also eventually gave up, and China, who are currently running a few, including some SMRs.

X-Energy's design is based on technology from South Africa, whose engineering team was hired by them after South Africa gave up on the idea.

The Chinese started up a pair of pebble bed reactors a couple of years ago feeding a common 200MW turbine. This is the SMR that most people refer to when they talk about SMRs in China. Not a lot of information has come out about how successful it has been, so it's hard to comment on that one.

Pebble bed technology turned out to be one of those things that was simple in concept but turned out to be very difficult in practice. They are possible and do work, but their advantages over water based reactors have been greatly exaggerated in practice.

thames Silver badge

Re: More Micro than Small

The reactors in the story do not use bomb grade material. The US are diluting some of their stockpile of military bomb grade material down to come up with fuel for a few experimental reactors.

Under international treaty, there is "civil" and "military" uranium. Civil uranium is anything enriched to less than 20% U-235 (natural uranium is 0.7%). The reactors under discussion in this story require uranium enriched to 19.75% specifically in order to stay under the 20% threshold and to be classified as civil uranium. Bombs are made using uranium enriched to somewhere greater than 90% (exact details are a bit hard to come by). There's a big gap between the two. Most commercial power reactors run on 3% to 5% enrichment, or even natural uranium (0.7%) and the existing fuel supply chain is set up to provide that. You don't actually even need any enrichment if you have a good enough moderator and careful engineering design.

Plutonium also comes in civil and military forms, based on its ratio of isotopes. If the isotopes which can be used to make a bomb are below a set threshold, it is classified as civil plutonium. Above that level and it is military plutonium. Plutonium as it comes out of a normal power reactor in spent fuel is civil plutonium, and it can (and is in some countries) be separated out, mixed with recovered uranium, and used again as reactor fuel. Military plutonium has to be made by very carefully controlling the time the fuel spends in the reactor in order to generate the military isotopes while preventing the build up of non-bomb isotopes. This is done in special reactors which allow this sort of special handling.

thames Silver badge

More Micro than Small

These 50 to 80 MW reactors are more MMR (Micro Modular Reactors) than SMRs. SMRs that are actually finding customers and getting built are simply scaled down and simplified versions of normal commercial reactors and use standard commercial nuclear fuel.

X-Energy's design is a gas cooled pebble bed reactor (the fuel is in small spheres in a big bin) and the fuel is enriched to 20%. This compares to 3% to 5% for normal commercial enriched fuel, or 0.7% for natural uranium reactors (which are efficient enough that they don't need enrichment).

Kairos Power uses similar fuel to X-Energy, but is cooled with molten salt rather than helium gas.

These MMRs using exotic fuel suffer the problem that they don't have a secure source of fuel, and so far the US are scrabbling under the seat cushions looking for surplus bomb grade material they can dilute down to just under 20% to supply some fuel for experimental reactors. When it is eventually available, it is not likely to be cheap.

So far the US are at the stage of awarding contracts to companies to build enrichment facilities to make this fuel, but there is no domestic supply and these companies have been counting on importing some from Russia.

This special fuel is simply not available on a scale which can supply anything other than demonstration projects.

On top of all this, these MMRs have questionable economics except for things like replacing diesels for powering remote mines and communities.

On the other hand, SMRs in the 300MW range (or nearly 500 MW in the case of RR) use standard commercial fuel which they can buy from existing suppliers.

I find it difficult to believe that Amazon is serious about this, except as a form of green washing.

Page: