* Posts by thames

1097 publicly visible posts • joined 4 Sep 2014

Data breach reveals distressing info: People who order pineapple on pizza

thames

Re: food abomination

I saw chicken tikka masala pizza in the freezer section of a supermarket in Canada a couple of days ago. Then again, pineapple on pizza originated in Canada in a place not too far from where that supermarket is located.

In case any Indian intelligence operatives in Canada are reading this, I will hastily add that I've never eaten either so don't murder me.

GNU turns 40: Stallman's baby still not ready for prime time, but hey, there's cake

thames

Minix 3

El Reg said: "A more successful example – but also still rather incomplete – is Minix 3 ..."

The public server for Minix 3 has been more or less dead since 2016 or 2017, with no apparent development since then. There has been talk by various people about reviving it, but it doesn't seem to come to much.

US AGs: We need law to purge the web of AI-drawn child sex abuse material

thames

Some wrong assumptions are being made

Deepfakes and AI generated images are two completely different things. There should be no need to use live model child pornography to produce AI generated child pornography.

All the creators should need is images of pornography models who are of legal age but have slender builds, and non-pornographic images of children. They could then use generative AI technology to blend the two together.

I suspect that any law based on the premise that AI generated pornography depends on live model child pornography would fail in court if the defendant kept a good record of what data was used to train the model and could show that no live model child pornography was involved.

I suspect that the US will have to address the issue in one of two ways. Either pass a law that says if it looks like child pornography then it's child pornography, even if no children were involved (which is what some countries do); or else deal with what constitutes a derivative work when AI is involved, together with some privacy laws that have real teeth and can prevent images of children being used for unauthorized purposes. Then add on top of that a requirement for all image-related AI models to keep a record of what training material they used.

In the end though, as AI oriented hardware acceleration becomes more mainstream I suspect that attempts to prevent illegal uses of it will become futile.

Microsoft teases Python scripting in Excel

thames

Pandas and Anaconda

Given the repeated references to Pandas and Anaconda, I suspect that what this is really about is using Excel as a front end to Pandas (a major Python data analysis library).

Pandas is so important and so widely used in certain fields of application (data science, machine learning) that people are learning Python just in order to be able to use it. People who had previously used Excel for certain small to medium size tasks are dropping it in favour of Pandas when Excel starts to run out of steam on really big tasks. The data and calculations will reside in Pandas, while Excel is used for data entry and reporting.

Given this, the way that Microsoft can respond is to build Pandas access into Excel. In order to do this they need to add Python scripting into Excel. They then run the calculations in the cloud because the really lucrative (for Microsoft) applications will be ones which are too big to run on a PC.
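As a rough sketch of that division of labour (the file and column names here are invented for illustration):

    import pandas as pd

    # Pull the raw data out of the workbook; Excel is just the front end.
    df = pd.read_excel("sales.xlsx", sheet_name="raw_data")

    # The aggregation that would grind to a halt as native Excel formulas
    # on a really big data set runs in Pandas instead.
    summary = df.groupby("region")["revenue"].agg(["sum", "mean", "count"])

    # Write the result back for Excel to handle the reporting side.
    summary.to_excel("sales_report.xlsx", sheet_name="summary")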

I suspect they will eventually offer tiered accounts, with prices escalating with cloud instance size.

AVX10: The benefits of AVX-512 without all the baggage

thames

Re: flags

My software is an SIMD library. The problem is lack of published model specific data on the actual performance of individual SIMD instructions so this can't be solved by writing a library. Actual performance doesn't match up with theoretical performance. Sometimes the SSE4.2 version is faster, and sometimes the AVX version is faster. Sometimes the SIMD version is no faster than the non-SIMD version, or only marginally faster. This is CPU model specific behaviour, not something you can just make assumptions about by reading the vendor's published instruction documents.

The only solution is to benchmark each instruction with each integer and floating point data type on actual hardware for each CPU model and I can't afford to buy (and have no room for) every CPU model ever put out by Intel and AMD. The vendors don't publish model specific benchmarks and there is no authoritative third party published data that I am aware of which gives the answer to this problem.

Now multiply this through all the different x86 SIMD systems including mmx, sse, sse2, sse3, ssse3, sse4a, sse4.1, sse4.2, avx, avx2, and avx512. AVX512 itself has a dizzying array of different avx512 subsets as Intel attempted to segment the market in order to extract the maximum revenue for each chip model.

When you look at actual installed base of different CPU models, the only realistic course is to check for sse4.2 and use it if present, and if not, to fall back to a non-SIMD algorithm.
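In practice that dispatch can be as simple as the following sketch (Linux/x86 only; the two checksum implementations are hypothetical stand-ins for the SIMD and portable code paths):

    def cpu_flags():
        # Parse the CPU feature flags the kernel reports.
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    USE_SSE42 = "sse4_2" in cpu_flags()

    def checksum(data):
        # Use the SSE4.2 path if the CPU has it, otherwise fall back.
        if USE_SSE42:
            return _checksum_sse42(data)   # hypothetical SIMD implementation
        return _checksum_plain(data)       # portable non-SIMD fallback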

ARM is a completely different story. They have a simple and consistent SIMD instruction set instead of the absolute train wreck on x86.

thames

Re: flags

x86-64 SIMD is an utter shambles. There are so many different versions and variations across different CPU models.

I had a system which used SSE4.2. I looked at adding AVX support, and so spent time coding up AVX versions of the software.

When I benchmarked them though, I found that AVX was actually slower than SSE4.2 despite the theoretical advantages. A bit of googling turned up the information that this was a known problem, but was CPU model dependent.

Given the impracticality of implementing and benchmarking the AVX software on every model of x86 CPU out there, I binned the notion of using AVX and went back to SSE4.2. At least I know that it works.

Chinese media teases imminent exposé of seismic US spying scheme

thames

Re: I'm very dubious about this

Err, there's an explanation right in the Global Times article which El Reg references in the second paragraph of their story.

Here's one example of how seismic data can be used for nefarious purposes:

"By obtaining relevant data from seismic monitoring centers, hackers can deduce the underground structure and lithology of a certain area," the expert said. "For example, it can be inferred whether there is a large underground cavity, and thus whether it might be a military base or command post."

Here's another:

"Seismic intensity data is closely related to national security, for instance, some military defense facilities need to take into account factors such as seismic intensity," experts said.

So hypothetically you hack into a country's seismic monitoring system and wait for an earthquake. You then analyze the data to get an idea of the regional geology and how it relates to large underground command posts or aircraft shelters (whose location you find by other means) which you then use to decide how big of a nuclear bomb to use on that target. Then when you decide to start WWIII you have a slightly better idea of what to nuke and how.

What is it you imagine that the US do when they spend many, many, billions of dollars per year on their offensive "cyber" operations? The NSA (etc.) spies don't all spend their entire day playing video games and posting on Facebook. They spend at least some of it hacking into all and sundry, as the Germans (and others) have found out to their cost as one very well known example.

Oracle, SUSE and others caught up in RHEL drama hit back with OpenELA

thames

Re: I'm against this, but wait...

Red Hat are a company you can pay to provide support to you.

Debian are a community free software distribution. There is no Debian company selling support services. There are however companies distributing derivatives of Debian and providing support for those, such as Canonical (Ubuntu).

The key difference between RHEL and Debian is that RHEL's "upstream" (Fedora and Centos) is controlled by Red Hat who can dictate policy to them while there is no commercial company controlling Debian.

Debian are known for stability and being cautious about releasing new features in the stable release series. The majority of Linux distros are Debian derivatives (revenue is another story).

If OpenELA want to really make a mark in the Linux distro world, they could look at releasing an OpenELA distro with binaries, rather than just being a set of git repositories. They could then be what Centos was. Commercial support would still come from OpenELA members, based on their own binaries. OpenELA could then be a "Debian" equivalent.

Perhaps they have this in mind as their eventual goal. We'll have to wait and see.

thames

Re: founded to create continuity for all Enterprise Linux downstream distributions

Red Hat have already restricted RHEL source access to Oracle, Suse, etc. This is why the latter have formed this organization.

At this point it's still not clear what it was that Red Hat were hoping to accomplish. Was this a reaction to loss of market share to Alma, Rocky, Oracle, and the like? Or do they have plans for big price increases and want to make it a bit more difficult for existing customers to jump ship (or threaten to jump ship) to the alternatives?

Somehow this has to be connected to money, but I'm still not sure just how.

Overall though this Red Hat plan has to be related to distros which are exact clones of specific RHEL releases and not ones which are Fedora or Centos derivatives, because the latter haven't changed policies.

Microsoft: Codesys PLC bugs could be exploited to 'shut down power plants'

thames

Re: Codesys

Almost nobody doing work on actual PLCs (I can't say what it is for Codesys) cares about anything other than ladder or instruction list (IL, a sort of assembly language-like code).

State machine, sequential function chart (SFC or grafcet), and other flow chart type systems are usually add-on code generators which spit out unreadable and horrifically inefficient ladder or IL. Most people programming PLCs have never seen them and most probably don't know what they are.

I've used state machines and grafcet as design and analysis tools, but always manually translated that into ladder for actual implementation. State machines are good for visualizing some problems and SFC for others, but real world systems generally can't be done properly using either exclusively. You generally need a mixture of ladder (or IL) in with the flow chart type language. If the IDE system can't do mixed programming then the flow chart is useless in practical terms and you are better off sticking with straight ladder.

As for your amateur PLC botherer, knowing about the scan rate limit on a PLC is basic beginner knowledge. On most PLCs though the PLC will fault (shut down) if the scan rate limit is ever exceeded. You can then use the programming software to read the fault error message out of memory to see what the cause was. Usually though you'll run out of memory before you can write enough code to exceed the scan rate.

Codesys though isn't actually a PLC, it's an IDE that lets you write embedded software in a manner similar to how you would do it on a PLC. I don't know how it implements concepts like scan rate. Quite possibly it does it rather badly.

thames

Codesys

Codesys is not in itself a PLC. Rather, it's software which can be used to create programs which are written in PLC style and which are then compiled and run on an embedded system. It's mainly used by a few smaller companies in specialized equipment.

The mainstream PLC vendors all have their own proprietary systems which they sell as complete hardware plus pre-installed run-time software combinations.

Most people who work with PLCs will probably never run into Codesys at any time during their career, and I expect that a majority will never have even heard of it. As a consequence, any problems created by this will be fairly limited.

As for IEC 61131, it's a farce. It was a vendor driven process and they simply declared that anything and everything was standard, so it provides no cross-system compatibility or portability. Porting a PLC program means a complete re-write.

Shifting to two-factor auth is hard to do. GitHub recommends the long game

thames

oathtool

The deadline is end of 2023 now? Last year the absolute final deadline was end of 2022.

I made some notes on this last year, and based on that their preferred 2FA method is TOTP.

There is a command line program called "oathtool" which will run on my PC. I can integrate that into my git bash scripts and also add a simple GUI front end via zenity to use for web log-ins.

For example:

oathtool --totp 01234567

Oathtool is in the Ubuntu repositories, and the same probably goes for most mainstream distros. So far as I am aware, the phone app just generates a TOTP code just like oathtool does, so there's no need for me to use a phone or other separate hardware.
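For the curious, TOTP is simple enough that you can generate codes in a few lines of pure Python. This sketch assumes a base32 secret of the sort GitHub hands out, with the standard 6-digit, 30-second settings:

    import base64, hmac, struct, time

    def totp(secret_b32, digits=6, period=30):
        key = base64.b32decode(secret_b32.upper())
        # HMAC-SHA1 over the current 30-second time step (RFC 6238).
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, "sha1").digest()
        # Dynamic truncation down to a short numeric code (RFC 4226).
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # made-up example secret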

Linux has nearly half of the desktop OS Linux market

thames

Re: If ChromeOS is Linux...

You had problems getting it to install? It's fussy about hardware? Now that's real, old style Linux right there. I suddenly feel nostalgic about my early days with Linux.

AlmaLinux project climbs down from being a one-to-one RHEL clone

thames

Decisions, decisions

I have serious doubts that Rocky Linux will reliably be able to continue with an exact clone of RHEL. Red Hat can just keep tweaking their T&Cs to exclude them, leaving Rocky scrambling to try to find another loophole.

This fragments the "RHEL compatible" market, including for Red Hat itself. I tested my (free/open source) software on Alma. Will it run OK on the current release of RHEL? Who knows? I'm certainly not going to sign a contract with Red Hat in order to find out.

Without the derivatives being exact clones, it raises the question of just what the raison d'être is for any of the RHEL clones. They can say they are testing applications, but they won't have copies of the proprietary enterprise software which is tying customers to Red Hat, so how will they do that?

As I've mentioned before, I have several minor free/open source projects which I maintain. I've been testing them on Centos, and later on Alma Linux as substitutes for RHEL. If there are no exact RHEL clones, then should I even bother testing on RHEL equivalents or should I just delete that VM and save some time and disk space? I test on various other Linux distros (and BSD), so perhaps I should leave it at that and tell any RHEL users that there's nothing I can do if they have any problems.

I'm sure that Red Hat aren't going to lose any sleep over whatever decision that I make (I doubt they know that I even exist), but it does make me wonder how many other developers are in the same boat and will make the same decision.

Microsoft and GitHub are still trying to derail Copilot code copyright legal fight

thames

Re: Copying or transformative?

The issue will likely come when the project you are working on resembles only one or a very few original works that the model was trained on. The model will likely then spit out suggestions that are very close to the original work.

The biggest problem of course is that due to the way the system works, you will have no idea when this is happening.

To take your art analogy further, imagine a very unimaginative art student whose art education consisted of being shown a selection of paintings in a museum. Now suppose you tell this budding artist to produce a new landscape containing windmills. However, that art student has never seen an actual windmill himself and had only ever seen two museum paintings containing windmills. The resulting new painting will likely contain recognizable copying from the originals. This will be the issue with software.

Going by the way that the copyright system works, I would not be surprised if Microsoft are found to be not liable, but that the people who use their Copilot service are instead found to be shouldering the full liability on themselves for any resulting copyright violations. This is because Microsoft will be feeding customers small fragments at a time while the customer is the one assembling them into an infringing work.

Software copyright law is based heavily on copyright for books, movies, and other entertainment and educational media. Eventually generative AI will get good enough for it to be applied to that industry (it's already being used for stock images, with resulting controversy there as well). So imagine feeding 50 years of American TV situation comedies into a model and using them to produce new ones based on broad scripts without the original studios collecting royalties on them. It's not going to be allowed to happen and laws will be changed if necessary to ensure that it doesn't. Software will be affected as a byproduct of this as well.

thames
Boffin

Exact cloning is not required for it to be a copyright violation.

"arguing that generating similar code isn't the same as reproducing it verbatim"

US copyright law doesn't require exact copying for it to be a violation of copyright. For software there is a process of evaluation involving distilling the code down to its essentials and comparing it that way. That way you can't for example simply change the variable names and formatting to avoid copyright. If that were allowed then you wouldn't need AI to get around copyright law.

The real issue will likely come down to whether each bit of copying had enough code copied for it to be considered significant enough.

A common analogy is that you can't take the latest Harry Potter novel and simply change the character and place names, add a few scenes from another book, and then publish it as your own work. It's still considered to be a copy even though it's not identical.

Google accused of ripping off advertisers with video ads no one saw. Now, the expert view

thames

The dispute appears to be that someone is claiming that significant numbers of ad slots that were sold to advertisers as "in stream ads" (ads shown as part of a Youtube video) are actually being shown as "out stream ads" (ads shown as small pop-ups with the sound muted on text content sites). The former is considered to be a more desirable means of placing ads than the latter.

I suspect that what may be happening is that Google are selling ad placement packages which include a mixture of both (a quick reading of related material suggests that this is what ad consultants were recommending their clients buy). I suspect that Google's ad salesmen were promising their customers butterflies and rainbows of what is "possible" while the fine print on the contract was vague and waffley about what would actually be delivered. Customers therefore found themselves underwhelmed by the difference between what the marketers promised and what was actually delivered.

If this is so, then it looks like the ad men at the customers were getting a taste of their own medicine and I don't have much sympathy for them.

Personally I think that out stream ads (pop-up video ads that somehow get around standard browser auto-play blocking) are horrific and only bottom feeder low quality sites have them. Admen have been buying out stream ads because it provides more potential sites on which video ads can be shown, and they think that video ads are "better" than normal image/text ads. However, it sounds like out stream ads are a flop in terms of effectiveness and advertising directors who bought them are losing their bonuses and are looking for someone to blame.

Canada plans brain drain of H-1B visa holders, with no-job, no-worries work permits

thames

Re: For once, Trudeau's government has made an actual smart move.

This is not an equivalent to the US H1B visa. This is a one time block of 10,000 regular open 3 year work permits being opportunistically targeted at a particular pool of people. They can work for almost any employer anywhere in Canada and their spouses and children can get work and study permits as well.

Canada has loads of immigrants from India, so we are quite familiar with evaluating their qualifications.

thames

Re: Short term visas

This is completely different from a US H1B visa. This is simply a one time block of 10,000 work permits which are being targeted at certain US H1B visa holders who recently lost their jobs in the US or are concerned about losing their jobs. It's not an equivalent to the US H1B visa.

Under the terms of the Canadian program their spouses and children can also get temporary residency visas and work and study permits.

This particular work permit is a three year open visa and you can change employers. Once you get a work permit in Canada and have regular employment, applying for permanent residency is fairly straightforward. Once you have permanent residency you can get citizenship in 3 years.

People who are successful under this work permit will be encouraged to remain in Canada with their families as permanent residents or citizens. However, Canada is looking for people with education, skills, and qualifications, not just letting anybody in. Immigration is primarily intended to be used as a tool to support economic development policy. The Canadian immigration system has a record of conducting strategic "raids" on particular pools of labour on an opportunistic basis, so this one isn't a big surprise.

Rocky Linux claims to have found 'path forward' from CentOS source purge

thames

Re: "Certified"

Some of the people inconvenienced by this are people with Free Software projects who test their software before release. I have several minor projects which I run through automated tests on a variety of distros (and BSD) before each release.

Red Hat have a developer program which could theoretically get me a free copy for testing, but I'm not going to assume the legal liabilities which come with signing the licenses. These licenses include terms which do things like agreeing to subject myself to the jurisdiction of a foreign court and agreeing to reimburse Red Hat for the cost of license audits, etc. No other distro that my projects support requires me to do anything similar.

I'm currently using AlmaLinux. If they can't come up with a reasonable work-around, then I will simply drop Red Hat clones as a testing target.

Centos Stream and Fedora aren't viable substitutes as they are not the same as RHEL and the whole point of testing on a specific target (as opposed to "but it works on my PC!") is to duplicate a user's operating conditions.

I don't imagine that Red Hat will care one way or the other about me in particular, but other people in the same boat will likely be making the same decision as well.

Red Hat strikes a crushing blow against RHEL downstreams

thames

So sort of like the mainframe was at one time? How very IBM. How's IB-Hat's mainframe business doing these days by the way?

thames
Thumb Down

Not very developer friendly

I have a few minor open source projects in several languages which I distribute via the usual avenues for such things. Before I release a new version I run an extensive automated test suite on roughly a dozen different distros (including BSD) on x86 and ARM.

One of these test platforms has up until now been a Red Hat derivative. At first it was Centos, and later AlmaLinux.

The point of testing on Centos or AlmaLinux was to look for problems that people using RHEL may have and fix them before release. If AlmaLinux have to stop doing updates and get out of sync with RHEL then there's no point to running tests on it anymore. Fedora isn't a substitute since it is not the same thing as the current RHEL version.

I'm not going to either sign up for a RHEL license or a RHEL developers' license. All signing a developer agreement would do is give Red Hat greater ability to sue me for some incomprehensible reason in a foreign court. Why would I want to do that?

If AlmaLinux can't work around the current problems then I'll simply drop testing on RHEL derivatives. I don't imagine that Red Hat / IBM will even notice, but over time more developers may come to the same conclusion that I have and Red Hat may find that their platform gradually becomes the less reliable one on which to run actual applications.

For now though I'll wait to see what AlmaLinux can come up with.

This malicious PyPI package mixed source and compiled code to dodge detection

thames

Re: Why have pyc files in a package anyway?

The Python "interpreter" automatically compiles each source file as it is imported and then caches the compiled form as a ".pyc" file. This means that the second time it is executed it can skip the compile step and import the byte code binary directly. This can speed up start up time significantly. While the Python compiler is very fast, on large programs it can make a perceptible difference.

Because of this you don't actually need the ".py" file on imported modules if the ".pyc" file is already present.

This isn't something unique to Python, as many other languages have used a similar strategy.

Some people use this as a very weak form of copy protection so they can distribute Python programs to customers without giving them source code. That isn't what it was originally intended for; it's just a side effect of having a faster start up.

However, this does mean that there is a use case for having ".pyc" files rather than source in a package. This in turn means that having the standard installation tools exclude ".pyc" files would break at least some existing software out there.

The solution is to simply have the code analysis tools disassemble the ".pyc" files and analyze those (the output is like a form of assembly language). A disassembler comes as part of the Python standard library.
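For example (the cache file name here is hypothetical, and the 16-byte header size assumes CPython 3.7 or later):

    import dis, marshal

    # A .pyc is a 16-byte header (magic, flags, date or hash, size)
    # followed by a marshalled code object.
    with open("__pycache__/example.cpython-311.pyc", "rb") as f:
        f.seek(16)
        code = marshal.load(f)

    dis.dis(code)  # prints assembly-like bytecode for inspection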

Python Package Index had one person on-call to hold back weekend malware rush

thames

Re: Difference between PyPi and NPM?

It's apparently a typosquatting attack. The malicious author creates a new package which has a name which is very similar to a commonly used legitimate package.

When a developer who wants to install a legitimate package misspells the package name he may get the malicious one instead. It then gets installed and used in a project which the developer is working on.

The target seems to be web developers. When the victim tests his project with his web browser, the malicious package injects some JavaScript which looks for bitcoin addresses in his browser's clipboard. It then changes the address to the malicious author's own so that any bitcoin transactions go to the attacker's own wallet.

The attackers are using automated scripts to create hundreds of new package names which are very similar to legitimate ones, counting on human error so that one occasionally gets picked due to a typo.
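On the repository side, catching these largely comes down to fuzzy name matching. A toy illustration (the "popular" list is a made-up stand-in for real curation data):

    import difflib

    POPULAR = {"requests", "numpy", "pandas", "urllib3", "setuptools"}

    def looks_like_typosquat(name, cutoff=0.85):
        # Flag names that are suspiciously close to, but not exactly,
        # a well-known package.
        if name in POPULAR:
            return None
        close = difflib.get_close_matches(name, POPULAR, n=1, cutoff=cutoff)
        return close[0] if close else None

    print(looks_like_typosquat("reqeusts"))  # -> 'requests'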

This is a problem inherent to any repository which is not manually curated, regardless of the language.

The most straightforward solution is to only install packages from your distro's repos instead of directly from PyPI.

Double BSD birthday bash beckons – or triple, if you count MidnightBSD 3.0

thames

OpenBSD 7.3 won't install in VirtualBox

I installed both FreeBSD and OpenBSD in VMs on Tuesday. FreeBSD installed without any problems.

OpenBSD 7.3 however crashes during the installation process. I eventually ended up installing 7.2 and then did an upgrade to 7.3, which went fine.

I just repeated the attempt with a new download a few minutes ago before making this post. It's not clear what the problem is. The VM is VirtualBox 6.1.38 on Ubuntu 22.04.

I install all the major Linux distros and BSDs in VMs for software testing, and this is the only one which is giving me this sort of problem. Aside from that, it worked fine once it was installed and running.

One thing that I did notice is that in OpenBSD the Python version has been upgraded to 3.10 from 3.9, while with FreeBSD it remained unchanged at 3.9. This isn't a problem (I welcome it in fact), but it's worth noting that a change like this has been made in a point release.

TikTok: Is this really a national security scare or is something else going on?

thames

Re: TikTok is a smokescreen

I've just skimmed over the actual proposed legislation, and TikTok isn't even mentioned. What it is is a law to allow the US president to arbitrarily ban pretty much anything involved in communications if he doesn't happen to like it. The VPN industry are apparently particularly worried.

Covered are:

  • Any software, hardware, or other product which connects to any LAN, WAN, or any other network.
  • Internet hosting services, cloud-based services, managed services, content delivery networks.
  • The following is a direct quote: "machine learning, predictive analytics, and data science products and services, including those involving the provision of services to assist a party utilize, manage, or maintain open-source software;"
  • Modems, home networking kit, Internet or network enabled sensors, web cams, etc.
  • Drones of any sort.
  • Desktop applications, mobile applications, games, payment systems, "web-based applications" (whatever those are interpreted to mean).
  • Anything related to AI, quantum cryptography or computing, "biotechnology", "autonomous systems", "e-commerce technology" (including on-line retail, internet enabled logistics, etc.).

In other words, it covers pretty much everything in the "tech" business. Singling out TikTok in particular is nothing but a red herring meant to divert attention from what is actually going on.

The bit covering "open source software" is particularly troublesome. People running open source projects may have to seriously think about moving their projects outside of US influence.

thames

Re: "Nations are one by one banning it from government-owned devices"

If there are genuine security concerns then the only apps which should be present on government owned devices are those which have gone through an official security review and been approved as valid and necessary for the device user to perform his or her job.

And for the 95 per cent of the world who aren't the US, Facebook, Twitter, and the like are equally as problematic as TikTok and for the same reasons.

Putting out a blacklist is pointless, as anyone with any knowledge of the subject would know. Lots of apps rely on third party libraries which have data collection features built into them, it's part of their business model. This data is then sold to data brokers around the world with few or no controls over what is done with the data or who it is sold on to. There are so many apps in existence that blacklisting is an exercise in futility.

Don't allow anything on government devices which has not gone through a security review and whose data is hosted outside of one's own country. The same rules should be applied to any business which handles matters which have security implications.

I don't know of any business which allows individual users to install whatever they want on government owned PCs, so why should phones be any different?

The same thinking should be applied to the OEM and carrier crapware that gets pre-loaded onto phones as well. There should be no pre-loaded apps beyond those which have been approved.

Chinese web giant Baidu backs RISC-V for the datacenter

thames

I suspect that Baidu have a list of things they are interested in using RISC-V for, but are not making solid commitments to any in particular on any specific time line until they've done some researching and testing.

Overall though, I expect them to shift to RISC-V over a period of years. Anything else is too much of a risk given the current international environment.

We can probably expect to see India going down the same road for the same reasons, just a few years behind though.

Critical infrastructure gear is full of flaws, but hey, at least it's certified

thames
Boffin

Let's look at a few CVEs

I had a look at the actual CVEs for kit that I'm familiar with, and I'm not too worried by what I saw.

I'll take Omron as an example because their kit is the most mainstream of that listed. They had three CVEs listed against them. One CVE was for passwords saved in memory in plain text. However, in decades of doing PLC programming I've never seen the password feature used even once on any PLC. It's something which a few customers want, but it just isn't used by most. Not all PLCs even have a password feature. In practical terms there's nothing to stop someone from simply resetting the PLC to wipe the memory and loading their own program anyway.

The other two CVEs basically amount to the user programs are not cryptographically signed. That's no different from the fact that I can load programs on pretty much any PC without those being cryptographically signed either. I can write a bash script and run it without it being cryptographically signed. Accountants can write spreadsheets and run them without them being cryptographically signed. Saying that PLC programs should be cryptographically signed is really stretching things a bit.

The main real world security vulnerabilities in industrial control systems have been bog standard Windows viruses, root kits, and the like in PC systems running SCADA software.

My opinion of security for industrial equipment is the OEMs are not security specialists and they won't get it right so they're better off not trying. Also the in service lifetimes of their kit can be measured in decades, which is far beyond any reasonable support period.

More realistically, isolate industrial kit from any remote connections. If you need a remote connection, then use IT grade kit and software as a front end to tunnel communications over and get the IT department to support it as they should know what they're doing, unlike say the electricians and technicians who are often the ones programming PLCs.

Expecting non-security specialists to get security right 100 per cent of the time is a policy that can only end in tears.

Havana Syndrome definitely (maybe) not caused by brain-scrambling energy weapons

thames

Already solved in a Canadian investigation made at the time in question

Canada investigated the problem and fairly quickly found it was due to excessive use of pesticides during the zika virus outbreak in the Caribbean at the time. A US contractor from Florida was used to fumigate the embassies and staff housing with organo-phosphate pesticides.

Due to the panic over zika at the time (the virus was causing serious birth defects), the fumigation was carried out much more frequently than normally recommended. The result of this was that people were exposed to toxic levels of pesticides.

Blood samples from Canadian diplomats showed above-normal levels of organo-phosphate pesticides. The symptoms associated with this are consistent with those associated with so-called "Havana Syndrome", including hearing sounds that aren't there and the rest. Examination of the patients also found nervous system damage consistent with pesticide poisoning. Organo-phosphates affect the nervous system (some types of organo-phosphates are used as military nerve gas), so effects on the brain are entirely to be expected.

The US government were aware of the Canadian medical investigation but chose to ignore it. Hypothetically this may have been due to concerns about the finger of blame (and lawsuits) coming back on the US officials who approved the use of excessive amounts of pesticides.

By order of Canonical: Official Ubuntu flavors must stop including Flatpak by default

thames

Re: Who cares?

As I mentioned in a post above, the issue is with respect to the terms and conditions for using the "Ubuntu" trademark and financial support in the form of free services. If you want to use the trademark and get official help, then you need to follow the guidelines.

thames

Re: Software Freedom

Derivatives are free to do what they want with the software. If they want to use the "Ubuntu" trademark or get free infrastructure and other benefits though, then they have to stick to Ubuntu policies with regards to official "flavours".

Other distros do take the Ubuntu software and don't follow Ubuntu guidelines, but they don't get to call themselves "Ubuntu".

Try starting your own distro and calling it "Debian" without Debian's permission and you may find that Debian the organization may have a sense of humour failure. The same goes for Red Hat or Suse.

Ubuntu are probably the least restrictive of the major distros when it comes to derivatives.

thames

Re: future of apt on Ubuntu?

Aside from Firefox, Snap seems to mainly have replaced PPAs. It used to be that if you wanted the newer version of some obscure package you needed to find some potentially dodgy PPA and install that.

Now that's done through Snap, there's an official Snap store, and Snap packages are confined so they have limited access to resources.

It's useful for certain things, such as allowing one Firefox package to run on all Ubuntu versions instead of rebuilding it for each version. It's also good for obscure packages that need to be kept up to date.

On the other hand, it's probably not going to replace the bulk of Deb packages, as there's no reason to do so. I have a handful of Snap packages installed (e.g. the Raspberry Pi imager), and I normally look for a Deb first before falling back to a Snap. In some cases without the Snap I would probably have to build from source.

The main targets for Snaps are actually applications from propriety vendors who traditionally had horrifically bad packages, and games, as game studios don't want to update their packages for each new release. You could also think of Snap as being an alternative to something like Docker when it comes to server applications.

Flatpak on the other hand seems to be pretty badly thought out. It only does GUI apps, doesn't handle server cases, and package management is pretty poor (as described in the article). Snap is what Flatpak should have been, and the only real reason why Red Hat and friends persist with Flatpak seems to be NIH. There's nothing to stop each distro from setting up their own Snap store the same way they do their Deb/RPM repos.

Could RISC-V become a force in high performance computing?

thames

Re: A mixed blessing?

Massive incompatible fragmentation outside of the core instruction set pretty much describes x86 vector support, and that doesn't seem to have hurt it any.

The big question is whether CPUs will be available on low cost hardware equivalent to a Raspberry Pi so that people can test and benchmark their code, or if you have to book time on the HPC system just to compile and test your code. That is what will make the difference.

If you are doing serious vector work you need to use the compiler built-ins / extensions, which are more or less half way between assembly language and C. Good vector algorithms are not necessarily the same as non-vector algorithms, which means you need actual hardware on your desktop for development. This is the real advantage that x86 and (more recently) ARM have, and which RISC-V will need to duplicate.

US and EU looking to create 'critical minerals club' to ensure their own supplies

thames

Re: What about Canada and Mexico?

It's about the electric car market. The US introduced highly protectionist subsidy legislation for the US domestic car market. The goal was to establish US manufacturing sites in the market before other countries got their foot in the door. This was in direct violation of NAFTA rules, and Canada and Mexico threatened retaliation.

Canada then offered the US a face saving formula of "access to critical minerals" in return for watering down the protectionist measures. What exactly that means is anybody's guess, as nobody was seriously talking about export bans to the US (or anyone else) anyway. However, Canada has been using the "critical minerals" phrase (very vaguely defined) in trade talks with a variety of countries (particularly in the Far East). Canada is also very good at finding and exploiting American political weak points, and found a formula that played to American fears about China. Mexico was brought into the plan and the two presented a united front that got the Americans to water down their protectionist measures.

The EU were already unhappy about the new American auto market protectionism and were also talking about retaliation in the form of action through the WTO. This is something which the US would be guaranteed to lose, but would take years to get a final decision on (assuming the US didn't simply ignore it). Once they saw the US reverse course in the face of threatened Canadian and Mexican retaliation, the EU demanded a similar deal. The same "critical minerals" red herring was brought into the discussion for the same reasons.

Biden is as protectionist as Trump ever was, even more so in some ways. This latest venture in the form of the "Inflation Reduction Act" is an absolute disaster for free trade and a major step towards state control of industry. International auto trade would be a major casualty of it if it isn't de-fanged.

Any talk about a "critical minerals club" controlled by the US (or the US and EU) is simply talk. None of the big exporters have any incentive to sell to anyone other than the highest bidder and really aren't interested in being used as cannon fodder by the great powers.

Oh, WoW: Chinese gamers to be cut off from Blizzard games next week

thames

Re: Cause and effect...

Blizzard wouldn't have left the contract renewal negotiations to the last minute, so they will likely have been negotiating for at least six months. The statement from NetEase in November would seem to indicate that talks were going nowhere by then.

I would imagine that it's all about money. NetEase have probably been getting bent over a barrel by Blizzard and want more money. Blizzard probably want to pay them less.

Given that there's been no announcement yet about who is going to replace NetEase, it sounds like Blizzard has had even less luck at finding a replacement who are willing to accept the terms that are on offer. If it was NetEase who were the problem, then Blizzard would have found someone else by now.

Should open source sniff the geopolitical wind and ban itself in China and Russia?

thames

Re: Absurd

The article is based on the assumption that the US government will dictate terms to the rest of the world and everyone will fall in line. The world isn't like that. It's worth remembering by the way that the RISC-V organization moved out of the US specifically to get away from this sort of thing.

Once politically based license restrictions start there would be no end to them. Loads of people will start putting in license terms that say things like "cannot be used by anyone who does business with the US military". That's why this sort of thing would backfire on the US.

If you think that's a bit over the top, if you follow the links associated with the "advocate" promoting this idea, you will find that the licenses being promoted as being "ethical" do exactly that.

The Tornado Cash example was a complete red herring, as it's the Tornado Cash company who were being blacklisted for money laundering, not the source code. The Github repo is still on line, but the company's access to it has been frozen. There's nothing to stop someone else from forking the code and continuing on with it.

There's a good reason why proper Free Software has no restrictions on fields of endeavour. Once you bring politics into software licenses, the field would Balkanise into incompatible licenses, and software would fall into the hands of a few big proprietary vendors who would base their business on having huge teams of lawyers who can navigate the licensing issues in each country.

Tech supply chains brace for impact as China shifts from zero-COVID to rampant COVID

thames

Not sure the evidence is there

El Reg referenced two studies. The link to one appears to be broken, but the Singapore study titled "Comparative effectiveness of 3 or 4 doses of mRNA and inactivated whole-virus vaccines against COVID-19 infection, hospitalization and severe outcomes among elderly in Singapore" had this to say:

"As BNT162b2 and mRNA-1273 were recommended over CoronaVac and BBIBP-CorV in Singapore, numbers of severe disease among individuals who received four doses of inactivated whole-virus vaccines or mixed vaccine type were too small for meaningful analysis." So, we may not want to draw too many conclusions from that study.

Another comparative study which is widely referenced is one in Brazil titled "Effectiveness of CoronaVac, ChAdOx1 nCoV-19, BNT162b2, and Ad26.COV2.S among individuals with previous SARS-CoV-2 infection in Brazil: a test-negative, case-control study"

June 01, 2022

Here's the link to the article in The Lancet (a major UK medical journal).

https://www.thelancet.com/journals/laninf/article/PIIS1473-3099(22)00140-2/fulltext

It had this to say:

"Effectiveness against hospitalisation or death 14 or more days from vaccine series completion was 81·3% (75·3–85·8) for CoronaVac, 89·9% (83·5–93·8) for ChAdOx1 nCoV-19, 57·7% (−2·6 to 82·5) for Ad26.COV2.S, and 89·7% (54·3–97·7) for BNT162b2."

The vaccines referenced are CoronaVac, Oxford AstraZeneca, Johnson & Johnson, and BioNTech/Pfizer respectively.

In other words, the Johnson & Johnson is not very effective, but CoronaVac seems to be only marginally less effective than Oxford-AstraZeneca or BioNTech-Pfizer in terms of hospitalisation or death.

Efficacy against infection is less impressive, but the same is true for the rest of the vaccines as well.

There are two main components in your immune system which vaccines stimulate. Antibodies prevent infection, but they are short acting (weeks or months at most) and sensitive to changes in variants. T-cells prevent severe hospitalisation or death and are both much longer lasting and far less sensitive to changes in variants.

The big problem in China isn't that their vaccines don't work. The problem is that the older people are the ones who are least likely to have gone out and gotten their jabs and the younger people are the most likely, while in many Western countries it's the other way around.

What this suggests is that there will be plenty of symptomatic infections among the general population, but hospitalisation and severe disease are likely to be mainly in people who are unvaccinated.

Since in China the unvaccinated are mainly the elderly who are much less likely to be part of the working population, the economic effects are at best unclear.

Among the working population there may be plenty of short term work absences due to mild illness, but we saw the same in Western countries a year ago when the omicron wave swept through and that didn't shut down the economies there.

I'm making no predictions here, just pointing out that it's a bit soon to be predicting pandemic induced chaos in the Chinese economy as the evidence isn't there yet.

That doesn't mean to say that there won't be plenty of unfortunate deaths, but the kit you've got on order from China may arrive safely after all.

Bill Gates' nuclear power plant stalled by Russian fuel holdup

thames

Re: Low enrichment?

HALEU (high assay low enriched uranium) is the name the US uses for uranium that has an effective enrichment of 5 to 20 percent.

Just straight low enriched uranium is 5 percent or below. US reactor designs use this.

Natural uranium is less than 1 percent. About 10 percent of the world's reactors are built to use this.

Their present plans to make it involve taking some of their existing stockpile of highly enriched uranium from weapons reactors, reprocessing the fuel, and blending it with low enriched uranium.

There is a finite amount of this available, only enough to do some demonstration projects.

For commercial supply they will have to build more uranium centrifuges, which are apparently years away from coming into service.
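The blending itself is just a mass balance. As a rough worked example (the enrichment figures are illustrative, not official):

    # Fraction x of HEU at e_heu blended with (1 - x) of LEU at e_leu
    # gives a target enrichment e_target.
    e_heu, e_leu, e_target = 0.90, 0.045, 0.1975  # illustrative levels

    x = (e_target - e_leu) / (e_heu - e_leu)
    print(f"{x:.3f} kg of HEU per kg of blended product")  # ~0.178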

thames

Re: Poor choice of fissile material?

The modifications can be as minor as a new fuel bundle design. There are many different fuel compositions incorporating thorium, all with various pros and cons. If you're satisfied with using some thorium in existing heavy water reactors you don't need much change to them. If you want a completely self-sufficient fuel cycle, then the reactor design itself has to be tweaked to optimize it for thorium. There are many other solutions that fall somewhere in between.

As I've said before though, there's currently no economic case for thorium fuel. If uranium gets expensive enough, then that will change.

Canada has done tests with thorium starting in the 1950s. Making use of thorium however involves re-processing spent fuel, otherwise it's a waste of time. At present it's cheaper to just use a once-through uranium fuel cycle and store the spent fuel until fuel prices rise enough to make recycling worthwhile.

The reason the US is so fixated on thorium fuel at this time is because of their concerns about international nuclear proliferation with enriched uranium.

This has never been a concern for countries that don't use enriched uranium in their power reactors. They have looked at thorium purely from an economic standpoint, which at present doesn't justify its use.

Current US reactor designs and their derivatives (e.g. in Japan and elsewhere) are descended from naval power plants (nuclear submarines) where reactors had very tight space constraints.

CANDU and derivatives are descended from the joint Canada-UK nuclear weapons program in WWII. The current CANDUs and derivatives are direct descendants of the first Canada-UK weapons program reactor built during WWII north of Ottawa. This resulted in a completely different development path from nuclear power in the US right from the first criticality experiments onwards.

A lot of the assumptions that people have about nuclear reactors based on US experience simply don't apply in this case. The whole subject is very complicated and a lot of the real problems with thorium revolve around fuel composition, fabrication, and reprocessing. Many of the reactor designs you see promoted by start-ups are based on very complex proprietary fuel designs whose main purpose seems to be to create some potentially extremely lucrative vendor lock-in for refuelling and licensing.

thames

Re: Poor choice of fissile material?

Thorium will work as a fuel in modified CANDU style heavy water reactors, which is what Canada uses (as well as a number of other countries).

However, uranium is currently cheap enough that it's not worthwhile using thorium. India are working on it because they have lots of thorium but not much uranium and want nuclear fuel self sufficiency. Canada has done experiments on it to prove the technology, but the economics don't justify using it as a production fuel.

Thorium can be used as a fuel without any new or exotic technology, there's just been no reason to bother until uranium gets expensive enough. Despite all the hype, thorium is not a magic solution to any problems we actually have at this time.

GCC 13 to support Modula-2: Follow-up to Pascal lives on in FOSS form

thames

Opaque Types

The story missed one of the big features of the language, which was "opaque types". This was a different approach to solving the same problem as object oriented programming was trying to address at about the same time.

The name of an opaque type would be declared in the module interface, but any details of what it actually was would be hidden in the implementation, and so not visible to anything outside of the implementation (interface and implementation were defined parts of each module, and in separate files).

The interface would also declare functions which would take the opaque type as a parameter, and so allow you to manipulate it. Anything outside of the module however couldn't do anything with the type directly.

Overall it was equivalent to an object's attributes and methods, but defined at the module level. It differed from objects however in that there was no internal state to the module; you needed to pass around the opaque types explicitly.
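In modern terms the pattern looks roughly like this (a loose Python analogy, since nothing today quite matches Modula-2's separate interface and implementation files):

    # stack.py -- the representation is private to the module by
    # convention; callers only get an opaque handle and the functions
    # declared in the "interface".

    class _Stack:                 # hidden implementation detail
        def __init__(self):
            self.items = []

    def new_stack():
        return _Stack()

    def push(s, value):
        s.items.append(value)

    def pop(s):
        return s.items.pop()

Callers import the module and pass the handle to every call explicitly, which matches the point above about there being no hidden module state.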

I did several projects using TopSpeed Modula 2. It was probably by far the best compiler, IDE, and debugger available for MS-DOS PCs in that era. I also had their C compiler as well.

If I had any criticism about Modula 2 it was that screen I/O was a lot of work due to the strong typing of the I/O. Every data type had its own output function, and even newline was a separate function. C's printf may be complex and sloppy, but it's also a lot less work to use.

I liked Modula 2 a lot at the time, but I don't think it's going to make a come back. The industry went firmly down the object oriented route and Modula 2's opaque types are an interesting but forgotten side diversion.

CERN, Fermilab particle boffins bet on AlmaLinux for big science

thames

Been using it for almost a year.

I've been using AlmaLinux as a replacement for CentOS for automated software testing for almost a year now, and I've had no problems with it that I can recall. I can recommend it for that purpose without hesitation.

I was motivated to use it by CentOS dropping support of their then latest release. I had to either pick something new or drop Red Hat as a supported distro for my software project. AlmaLinux was a straight drop in replacement for CentOS.

I want to congratulate Lord Raglan and Marshal Saint-Arnaud for their success at Alma, hard fought it no doubt was. With that out of the way Sebastopol should be within their grasp before long. Hurrah!

US ends case against Huawei CFO who holed up in Canada for three years

thames

The view from Canada was a bit different.

As I pointed out in the comments to the previous story (which El Reg references in this one), the whole story was very badly reported on by the international press, who mainly just reprinted US official spin. The Canadian press attended the actual trials and their reports gave a very different picture.

To summarize a few points, the reason the US charges were constructed as "fraud" was to get around Canadian extradition law ("dual criminality"). Sanctions charges would have been tossed out by the court in Vancouver immediately, as Canada was still part of the European deal with Iran so no sanctions laws were violated in Canada. The US therefore constructed a very convoluted argument that "fraud" was committed because fraud is illegal in Canada, and they knew the judge in Vancouver was not able to take into consideration whether there was a genuine fraud case against Meng, just that fraud was also a crime in Canada and the US was charging her with that.

Meng's lawyers were able to obtain copies of documents directly from HSBC which contradicted versions provided by the US as evidence (they needed to show that they had some sort of evidence). The US versions of the documents turned out to have been edited by the US to remove significant exculpatory evidence. However, the extradition judge didn't consider these documents, as they were evidence of Meng's innocence and Meng's guilt or innocence could play no part in an extradition hearing which was only concerned with whether the right paperwork had been filled out and whether Canadian officials had behaved legally.

Canada is in the process of revising its extradition laws due to systematic abuse of the extradition processes by allies. This was already on the agenda before the Meng case and so had nothing to do with that. France in particular were notorious for extradition cases which turned out to lack substance upon actual trial. The extradition review is now back on the table after having been sidelined by the pandemic.

The Meng extradition case seemed to be on the point of collapse on abuse of process grounds when the US suddenly reversed course and decided they wanted a "deal" instead. Meng's strongest arguments against extradition had always been on abuse of process grounds, and the hearings on that were about to start when the deal to drop the extradition was reached. The evidence before the court showed that Canadian police and immigration officials had been doing illegal favours for their US counterparts, police were suddenly reversing their testimony and contradicting their written notes, and one of the key senior police witnesses had left the country (to go to Macau!) and had hired a lawyer to try to fight having to testify. A lot of people might have been found to have been involved in a lot of unsavoury activities if the hearings had proceeded. This sort of thing probably goes on all the time; the police just weren't used to dealing with someone who had the money to hire the lawyers to fight it.

A number of senior retired Canadian diplomats and cabinet ministers had advised Ottawa that there were sufficient grounds to toss the case on final review (which would have followed the judge's decision). Trudeau, however, was desperate to avoid getting dragged into the issue because he had just taken a major kicking at the polls over the SNC-Lavalin legal scandal and wanted to avoid anything which might remind people of it, justified or not.

Canada had been pressing the US to make some sort of "deal". Trump, however, was not willing to accommodate Ottawa, although his offer to China to release Meng in return for a good trade deal would likely by itself have provided grounds for Canada to refuse extradition at a later point in the process.

However, once Trump was gone, Biden was apparently more willing to accommodate Canada, and a deal was finally done. It was widely suspected to be a quid pro quo for Canada not making a fuss over the cancellation of the Keystone XL pipeline.

Alibaba, Tencent enlisted to help sanction-weary China build RISC-V chips

thames

Re: RISC-V fragmentation

When you look at what RISC-V is currently mainly used for, which is embedded devices, and compare those to their ARM equivalents, you see that ARM is just as fragmented. It doesn't matter though, as they are single-purpose devices.

Where it could become an issue is when RISC-V starts being used in phones, servers, PCs, etc., where the user acquires software separately and wants to run it. There, ARM has only recently started sorting itself out after a good deal of effort by various Linux trade groups.

What they can do with RISC-V is learn from the mistakes ARM made and design chips that standardize on similar solutions for things like device discovery in servers.

I hope, by the way, that nobody is under the impression that there isn't fragmentation in the x86 or ARM markets: both require a programmer to jump through a lot of hoops and do a lot of testing to use anything beyond the lowest-common-denominator features (see the sketch below).
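
As an illustration of the sort of hoops I mean, here is a minimal sketch in C of run-time feature detection on x86, using GCC/Clang's __builtin_cpu_supports built-in; the two process_* functions are hypothetical stand-ins for a tuned and a baseline code path:

    #include <stdio.h>

    /* Hypothetical stand-ins: one code path tuned for AVX2,
       one generic path that runs on any x86-64 CPU. */
    static void process_avx2(void)    { puts("using AVX2 path"); }
    static void process_generic(void) { puts("using baseline path"); }

    int main(void)
    {
        /* Query the CPU at run time so one binary can serve
           machines with and without the optional feature. */
        if (__builtin_cpu_supports("avx2"))
            process_avx2();
        else
            process_generic();
        return 0;
    }

Multiply that by every optional extension you care about, and every CPU you ship on, and the testing burden becomes clear. The same pattern applies on ARM, and presumably will apply on RISC-V with its many optional extensions.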

AI giant Baidu shrugs off US chip export restrictions as having 'little impact'

thames

Re: No Surprise

Anyone in China who wants to get into the chip fab equipment business now has a guaranteed market protected from outside competition.

Give it five or ten years and the Americans will be filing trade complaints about unfair Chinese restrictions on imports of chip fab equipment.

Meta links US military to fake social media influence campaigns

thames

These are the same sort of reports that we get when talking about alleged Russian, Chinese, or other operations; in this case they just say "US" instead of "Russia". If it's good enough for the one set of reports, it's good enough for the other.

We know the US engage in this sort of thing because they've admitted it in the past. The UK do the same by the way, using 77 Brigade (one of their main specialties being social media "influence operations").

As for what "associated with" means: without access to the relevant HR records, Facebook can't exactly tell us whether the people involved are salaried civil servants, military personnel, military reservists on duty, or outside contractors.

I too would like to see more detail on this sort of thing, but among other things I imagine that managers at Facebook don't want to have their collars felt by the US authorities for revealing too many operational details.

Qualcomm faces fresh competition in world of Arm-based Windows PCs

thames

Windows on x86 is the new mainframe.

Apple has shown that it's possible to make an ARM CPU for PCs that has reasonable performance.

The issue with aiming for the Windows market is that the amount of third-party software, especially business-critical software, is far, far larger for Windows than for Apple. And that x86 software is embedded in running systems which are not easily replaced, even where porting to ARM is technically possible. Microsoft makes most of its Windows money from licensing and support contracts with businesses who are tied to x86 by all of this software.

Microsoft Windows on x86 is the new IBM mainframe. It may not be the future, but it's not going away any time soon, and it will continue to provide a steady cash stream to Microsoft.

LockBit suspect cuffed after ransomware forces emergency services to use pen and paper

thames

Re: At least they caught one

The gang seems to have been based in Ukraine and at least some of the others were arrested last year. Vasiliev seems to have evaded the initial rounds of arrests because he lived in Canada.

He's an idiot. He would have known the authorities were after the gang when the others were arrested in Kiev, so he should have dropped all connections with it then and worked to cover his tracks. His second warning should have been when Canadian police raided him during the summer looking for evidence. He just kept at it though, and there was loads of evidence lying about the place.

India's Home Ministry cracks down on predatory lending apps following suicides

thames

Is this related to the recent problems with organized IT crime elsewhere in Asia?

It would be interesting to know if these are the notorious Chinese crime gangs that have been operating out of Myanmar (Burma) and Cambodia which have also been in the news recently.

These crime gangs have been running rampant across China, Hong Kong, and much of the rest of east and southeast Asia with telephone scams, cryptocurrency scams, and pretty much everything else that can be done remotely. They recruit workers from all across Asia (including India) by promising them well-paid jobs in legitimate businesses and then holding them prisoner until they make enough money from scams to pay back a specified "debt".

Most of them seem to operate from free trade zones in Myanmar, with Cambodia being another major location, but they also have offices in places such as Dubai.

The Chinese government have been trying to get to grips with them by telling expats in certain countries to return home and provide evidence that they are engaged in legitimate business. If they refuse then their property in China will be seized and life will be made difficult for their families (e.g. cut off from government benefits). They seem to congregate in countries which don't extradite to China (either because of no treaty or because the legal system doesn't work).

The main problem, in Myanmar at least, is that the crime gangs seem to operate under the protection of the Myanmar military, who get a cut of the action. The main operating location is a free trade zone across the border from China. I think the free trade zones (which also host legitimate businesses) get Internet connectivity via China, and so have access to Chinese IT hosting services and the like through various cut-outs and fronts.

I don't know if the connection is there, but it occurs to me that it may not be a coincidence that the problems mentioned in India are occurring at the same time as the others across east and south Asia. I suspect that countries across Asia will need to cooperate to squash this problem; if they act individually against its local manifestations, it will just move from legal haven to legal haven.
