* Posts by thames

1110 publicly visible posts • joined 4 Sep 2014

Venturing beyond the default OS on Raspberry Pi 5

thames

Re: Raspberry Pi Imager with Other OS

You're using a lot of words to simply object to me calling the Raspberry Pi OS GUI "cut down". It's based on a modified LXDE, which is specifically designed for low RAM usage and was selected in order to run adequately on a 1GB Pi.

I made no mention of Xfce, Budgie, MATE, or KDE, because I haven't tried any of those on a Raspberry Pi. I can't speak about Xfce as I'm not as familiar with it, but Budgie, MATE, and KDE are not designed for low RAM usage, so I don't know why you would think I was referring to them even by implication. They are all relatively similar to Gnome in terms of features and size.

If you like any of those better than Gnome, then that's fine. The point which was being made is that if anyone is using an 8GB Pi (I haven't tried a 4GB model), then there is no technical reason why an OS which uses Gnome can't be on the list of options. The Pi seems to have the horsepower to run it just fine.

Someone else will have to speak with respect to Xfce, Budgie, MATE, or KDE, because I haven't tried any of them on my Raspberry Pi hardware. Someone else again will have to speak with respect to the 4GB models, as I don't have one and so haven't tested them.

My previous comments have been based on my experience running Ubuntu on a Pi4. I recently acquired a Pi5. After my previous post on this subject I was inspired to install Ubuntu 23.10 on a spare SD card (using the Raspberry Pi imager) and try it in the Pi5. It ran fine, with good graphics performance running the standard Ubuntu version of Gnome.

I also tried playing a YouTube video at full screen (1920 x 1080) and it ran just fine, with no glitches or visible dropped frames.

To reiterate my point, whatever Linux OS you decide to run, the Raspberry Pi 5 with 8GB of RAM will likely have more than enough CPU and GPU performance to run nearly any mainstream Linux distro which has a Raspberry Pi ARM version, and it will be indistinguishable from running it on an x86 PC.

thames

Raspberry Pi Imager with Other OS

With regard to using the Raspberry Pi imager with other operating systems, I've used it with Ubuntu on a Pi4 for a number of years with no problems at all. You just pick Ubuntu as the OS you want to install and it will download and install it on the card for you. About a dozen different versions of Ubuntu are available for the Pi, and they were one of the very earliest of alternative OSes available for it.

The imager also offers Apertis (some sort of Debian based embedded OS) and RISC OS Pi. These are in addition to the various versions of Raspberry Pi OS and application specific OSes (games, media, etc.).

I've used Ubuntu for some years as a server OS for testing ARM software that I have written. It's always worked just fine. I originally picked Ubuntu because there wasn't an official 64 bit version of Raspberry Pi OS at that time, but there was one for Ubuntu.

I also have a spare SD card set up with a desktop version of Ubuntu that I used as a backup in case my PC died. During the pandemic I relied on it for a few days while waiting for a spare part for my PC, and found that a Pi4 was adequate for normal desktop use provided I wasn't trying to watch full screen video over YouTube. The latter issue appears to have been due to GPU limitations on the Pi4. Ubuntu desktop (Gnome) apps though worked just fine, and I didn't notice any difference from using an x86 PC aside from the fact that the Pi booted up much faster than any PC that I've seen.

If your Pi has 8GB of RAM (I haven't tested with 4GB), I don't see any technical reason to use a cut down GUI as opposed to standard (Gnome based) Ubuntu GUI. The Raspberry Pi OS doesn't seem any faster than standard Ubuntu in terms of GUI performance. Of course if you want to play around with other things then there's nothing wrong with that.

Year of Linux on the desktop creeps closer as market share rises a little

thames

We've been through this before

I can recall when WordPerfect and Lotus 123 were the corporate standards on every business desktop and most office workers where I am had never heard of, let alone seen, Microsoft Word or Excel. We used email and calendaring software from some vendor I can't recall at this point, and project management software from some other company I also can't recall. Accounting had a set of custom Lotus 123 macros which were deeply knitted into their accounting process.

However, the large manufacturing company that I worked for at the time got sold from one multinational to another multinational, and the new owners had a blanket license deal with Microsoft covering a wide range of products, including everything on the corporate "standard desktop". About a year after the company changed hands the edict came down from above that we were all to change to the new standard for cost saving reasons. The new bundle from Microsoft was slightly cheaper to license than the old bundle from a collection of other vendors.

The general opinion of the users was that all the new stuff was worse than the old stuff and nobody knew how to use the new stuff. The new stuff from Microsoft was slow, buggy, hard to use, and didn't have all the functionality of the old stuff.

However, none of that mattered. The new stuff was deemed to be cheaper as a bundle, so we were changing whether the users liked it or not (not that anybody actually asked us of course). This shouldn't be too surprising, as it's no different from how any other business decision was made.

Training was not an issue. If you needed training you were given the phone number of a company which did training and you could go get yourself trained on your own time (the company would pay for it though). If you couldn't get used to the new stuff, well, you could be easily replaced.

Your work quota of course would not be reduced by one iota during the transition. Just work longer (for no extra pay) if you were having any trouble with the transition.

Accounting depended heavily on Lotus 123 macros. They got told "sucks to be you" and were given the phone number of a local consultant whom they could hire to rewrite their macros to work with MS Excel.

I don't know how hard it was for the accounting people, but everybody else in engineering, sales, logistics, etc. figured things out for themselves and within a few weeks any remaining problems had faded into the background noise. The main problem remaining was that Outlook was buggy as hell and email and calendaring were not as smooth and as well integrated as the old solution (and still hadn't caught up years later when I left that company). We just had to live with it though.

So, we've been through this before, and the usual reasons cited for why we can't switch from Windows to Linux were not seen as any sort of barrier when we switched from non-Microsoft products to Microsoft products once the latter were seen as a cheaper alternative to the then industry standards.

The real reason that Microsoft is dominant on the corporate desktop is that they have the global business connections to make these sorts of deals with businesses and developers. None of the large "Linux" companies are really interested in tying up their capital to duplicate these business connections in what they see as a stagnant "legacy" market when the real market growth is elsewhere.

Windows is the IBM mainframe of the desktop. It's not going away any time soon, regardless of how much better the alternatives are (and Linux, at least in the form of Ubuntu, is definitely better on the desktop than Windows from a user perspective).

New York Times sues OpenAI, Microsoft over 'millions of articles' used to train ChatGPT

thames

Re: It's all about profit

You don't have to register your copyrights in the US in order to sue for infringement. Registration just affects the sort of damages you can claim.

If your copyright is registered you can claim statutory damages (an automatic amount) without having to prove actual damages (how much it really cost you). If you want to claim more damages than the statutory amount you can, but you have to offer proof of the value of the loss.

If your copyright is unregistered then you cannot claim based on statutory damages and have to prove actual damages, which means showing proof that you actually lost money due to the infringement.

What registration of the copyright does is basically make it easier for large companies to sue small infringers because they don't have to prove that the infringement actually cost them any money.

China's Loongson debuts processor that 'matches Intel silicon circa 2020'

thames

Re: Forget performance, what about availability and documentation?

According to the press release it runs several different Linux distros, including Kirin, Euler, Dragon Lizard, and Hongmeng. The first one may be an Ubuntu derivative for desktop, but I'm not entirely sure as there are multiple projects with similar names. The next two are CentOS derivatives from Huawei and Alibaba. The fourth is an open source Android derivative from Huawei. They're all existing current Linux distros that have simply been ported to Loongson.

There's also a big range of development tools ported to the architecture, including GCC, LLVM, Go, Rust, Dot Net, etc., along with audio and video accelerator codecs. They said they are working with and contributing code to nearly 200 international open source communities; they're not working in isolation on this.

thames

Re: Not big

Loongson hasn't been big in China because Intel and AMD chips had been available for import at competitive prices.

However, now that the Americans are embargoing sales to China, Loongson has been effectively granted a protected market to grow and develop in. The Chinese market is huge, so even if they don't see a lot of export sales (outside of embedded applications), they can still sell a lot.

If the Chinese government had been the ones to make the decision to exclude Intel and AMD from this market segment in order to promote sales of Loongson CPUs, the Americans would have been the ones complaining about protectionism.

thames

Re: Fake benchmarks though

Neither the author of the story nor the presenter at the conference made any claims about their 2.5 GHz chip matching an Intel 6 GHz chip in terms of performance. The story clearly states that it was being compared to "a comparable product from Intel's 10th-generation Core family, circa 2020". Anyone with a technical background would know that they would be comparing chips of a similar clock rate. Suggesting otherwise is being rather disingenuous.

The conference statements (linked in the story) show that the chip at introduction was being targeted at the broader desktop market, rather than the top end niche gaming computers which Intel said their fastest chips were aimed at.

The announcement title also suggested that they are aiming the first chip in this series at the mid-range PC market. They are clearly looking at mass market PC sales. They have server chips under development which will be announced later. If you want to see how their fastest server chips do, you'll have to wait for those to come out.

You have provided no evidence that any of the benchmark results were "fake" as you claim. You have just done a lot of hand waving to distract from the fact that the developers appear to have been able to design a chip which is competitive in technical terms in the market for which it is oriented.

Commercial success is different from technical success, so we'll have to wait and see how well this chip sells in the Chinese market and abroad, particularly outside of government sales.

I suspect that we will be seeing similar announcements coming out of India in about 10 years time or so, except based on RISC-V. They have similar ambitions as China with respect to IT technology independence, and for similar reasons.

Will anybody save Linux on Itanium? Absolutely not

thames

Many years ago I worked for a company that used DSP accelerator boards from a major vendor (number one in their field) in PC based test equipment that we built. The DSP chip was the DSP32C from AT&T. I wrote the software, and benchmarks showed that we could only meet the required performance in terms of how long a test took by offloading the heavy data array calculations from the PC running the overall system to the DSP board.

A few years later I was at a seminar run by our hardware vendor about their new products. They told us they were dropping the DSP board product line as it was no longer really necessary. The latest mainstream x86 CPUs had gotten faster, and more importantly they now had SIMD instructions. It was the SIMD instructions in the DSP which had made it so fast. I later did some benchmarks on newer hardware and found this was so.

One big advantage of integrating SIMD into the general purpose CPU, by the way, is that you no longer lose so much time shuffling arrays of data back and forth over the bus as you do different sorts of operations on them.

There was still a market for specialized DSP chips, but it was increasingly in certain specialized embedded applications where close integration with application oriented hardware features was important.

Royal Navy flies first mega Mojave drone from aircraft carrier

thames

Re: Probably the future of carrier operations

The immediate use case driving this current project is to find a replacement for Crowsnest (radar mounted on a Merlin helicopter) by 2030, which is when Crowsnest is scheduled for retirement.

The current plan is to have drones take over a lot of the routine monitoring and surveillance jobs that manned planes would have to be otherwise used for, freeing up the manned planes for more complex jobs. The latter includes both F-35s and Merlin helicopters. Drones have lower operating costs than high performance manned aircraft, and it's the operating costs, not the purchase price, which dominate the overall lifetime costs.

The issues being looked at include factors such as how well the model being evaluated will take off and land in all sorts of weather and sea conditions, and how to deal with safety issues such as making sure the drone doesn't crash into parked aircraft on the deck in the event of a bad landing. Since there's no pilot, I imagine the options for ensuring the latter are probably a lot more "robust" than would be the case if there were a pilot in the aircraft.

GNOME developer proposes removing the X11 session

thames

Gnome need a reality check when it comes to Wayland's problems

I wouldn't care whether I was using Wayland or X if it weren't for the fact that some really basic, essential features in Wayland were still not working, and if there weren't Wayland related bugs in things as simple as text editors (which would appear to be Gnome Wayland bugs).

I used Wayland by default with Ubuntu 22.04 until I got tired of having to log out and log back in with X whenever I needed to use an application which wasn't supported under Wayland. Now I just use X all the time and don't miss anything that Wayland supposedly offers.

I don't care about remote desktops. I don't care about multi-monitor support. I do care though about basic features which Wayland developers have acknowledged for many years as being something they needed to address but never seemed to get around to for no discernible reason.

I suspect that what will happen is that Gnome will follow their usual practice and just make Wayland non-negotiable whether it's ready or not. Then Fedora will adopt the new Gnome while nearly everyone else sticks with an older version of Gnome. Most application developers will go with the majority market share and do whatever most distros do, which will be to stick with an older version of Gnome.

Red Hat was able to ram Systemd and Pulse Audio down everyone's throats because they didn't affect most desktop users directly. For Wayland though, the deficiencies are serious enough to be absolute show stoppers for many people.

Microsoft says VBScript will be ripped from Windows in future release

thames
Meh

That brings back bad memories.

I used it once, 20 years ago, to hack together some scripts for manipulating and reporting on the text file output of something else. If I had been using a Linux system instead of Windows system I would have used awk, but it was Windows and I wasn't allowed to install anything that wasn't already there.

My memories of it are vague, but I do recall that it was ... horrible. I had no desire to repeat the experience.

I suspect that it won't be missed except by a small number of people who have inherited some ancient scripts that will have to be reverse engineered and rewritten in something else less obscure.

Nuclear-powered datacenters: What could go wrong?

thames

Canada is looking at very small SMRs for powering things like remote communities or major mines in order to replace diesel generators.

However, for grid connected utility use, the 300 MW size seems to be favoured. Regardless of the size, you still need site preparation and civil works (e.g. cooling), which have economies of scale.

The original idea behind SMRs was to have something that would be largely assembled in a factory to a standard design and then shipped to the site with minimal assembly there. This would result in faster design, licensing, and construction and so reduce capital costs. If they can do that with a 300 MW modular design, then there's a big advantage to these over smaller ones in terms of cost. The emphasis after all is on the "modular" part and the "small" is just a means to get there.

All of the really small SMRs that I have seen designs for use very expensive proprietary fuel assemblies, generally either highly enriched uranium or using plutonium from disassembled weapons. The business model seems to be based on vendor lock-in to proprietary fuel.

Canada isn't pinning all of its future plans on SMRs by the way. Future plans are still open to larger reactors similar to those already in use (850 MW or larger), depending on how things work out with the SMR designs.

All of this suggests, though, that data centre operators may be biting off more than they can chew if they try to become mini-utilities. They should instead just locate in countries or regions which have reliable sources of electric power.

thames

While nuclear power is undoubtedly at the start of a period of strong growth due to things like electric cars (see for example the huge new announcements in Canada), I have to question the economics of building a small nuclear power plant just for a data centre.

The small modular reactors announced in Canada are 300 MW and there will be four on a single site. That's 1,200 MW. I'm not aware of a data centre that draws that sort of power. There are economies of scale for civil works for things like cooling which make siting several medium size reactors together much more economic than small isolated ones. There are also issues with respect to making efficient use of skilled personnel which make grouping reactors together more sensible.

I suspect that relocating the data centres to areas where there is a reliable supply of electricity would make much more sense than building small generating plants just for data centres in areas with unreliable or limited generating capacity.

Data breach reveals distressing info: People who order pineapple on pizza

thames

Re: food abomination

I saw chicken tikka masala pizza in the freezer section of a supermarket in Canada a couple of days ago. Then again, pineapple on pizza originated in Canada in a place not too far from where that supermarket is located.

In case any Indian intelligence operatives in Canada are reading this, I will hastily add that I've never eaten either so don't murder me.

GNU turns 40: Stallman's baby still not ready for prime time, but hey, there's cake

thames

Minix 3

El Reg said: "A more successful example – but also still rather incomplete – is Minix 3 ..."

The public server for Minix 3 has been more or less dead since 2016 or 2017, with no apparent development since then. There has been talk by various people about reviving it, but it doesn't seem to come to much.

US AGs: We need law to purge the web of AI-drawn child sex abuse material

thames

Some wrong assumptions are being made

Deepfakes and AI generated images are two completely different things. There should be no need to use live model child pornography to produce AI generated child pornography.

All the creators should need is images of pornography models who are of legal age but have slender builds, and non-pornographic images of children. They could then use generative AI technology to blend the two together.

I suspect that any law based on the premise that AI generated pornography depends on live model child pornography would fail in court if the defendant kept a good record of what data was used to train the model and could show that no live model child pornography was involved.

I suspect that the US will have to address the issue by either having a law that says that if it looks like child pornography, then it's child pornography even if no children were involved (which is what some countries do), or else deal with the issue of what constitutes a derivative work when AI is involved together with some privacy laws that have some real teeth and which can prevent images of children being used for unauthorized purposes. Then add on top of that a requirement for all image related AI models to keep a record of what training material they used.

In the end though, as AI oriented hardware acceleration becomes more mainstream I suspect that attempts to prevent illegal uses of it will become futile.

Microsoft teases Python scripting in Excel

thames

Pandas and Anaconda

Given the repeated references to Pandas and Anaconda, I suspect that what this is really about is using Excel as a front end to Pandas (a major Python data analysis library).

Pandas is so important and so widely used in certain fields of application (data science, machine learning) that people are learning Python just in order to be able to use it. People who had previously used Excel for certain small to medium size tasks are dropping it in favour of Pandas when Excel starts to run out of steam on really big tasks. The data and calculations will reside in Pandas, while Excel is used for data entry and reporting.
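
As a very rough sketch of that division of labour (the file, sheet, and column names here are made up purely for illustration, and pandas.read_excel needs the openpyxl package installed to read .xlsx files):

    import pandas as pd

    # Excel holds the raw data entry; Pandas does the heavy number crunching.
    df = pd.read_excel("orders.xlsx", sheet_name="raw_data")
    summary = df.groupby("region")["revenue"].agg(["sum", "mean"])
    # Write a small result sheet back for Excel to format and present.
    summary.to_excel("summary.xlsx")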

Given this, the way that Microsoft can respond is to build Pandas access into Excel. In order to do this they need to add Python scripting into Excel. They then run the calculations in the cloud because the really lucrative (for Microsoft) applications will be ones which are too big to run on a PC.

I suspect they will eventually offer tiered accounts, with prices escalating with cloud instance size.

AVX10: The benefits of AVX-512 without all the baggage

thames

Re: flags

My software is an SIMD library. The problem is lack of published model specific data on the actual performance of individual SIMD instructions so this can't be solved by writing a library. Actual performance doesn't match up with theoretical performance. Sometimes the SSE4.2 version is faster, and sometimes the AVX version is faster. Sometimes the SIMD version is no faster than the non-SIMD version, or only marginally faster. This is CPU model specific behaviour, not something you can just make assumptions about by reading the vendor's published instruction documents.

The only solution is to benchmark each instruction with each integer and floating point data type on actual hardware for each CPU model and I can't afford to buy (and have no room for) every CPU model ever put out by Intel and AMD. The vendors don't publish model specific benchmarks and there is no authoritative third party published data that I am aware of which gives the answer to this problem.

Now multiply this through all the different x86 SIMD systems including mmx, sse, sse2, sse3, ssse3, sse4a, sse4.1, sse4.2, avx, avx2, and avx512. AVX512 itself has a dizzying array of different avx512 subsets as Intel attempted to segment the market in order to extract the maximum revenue for each chip model.

When you look at actual installed base of different CPU models, the only realistic course is to check for sse4.2 and use it if present, and if not, to fall back to a non-SIMD algorithm.
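
To illustrate that check-and-fall-back approach, here's a minimal Python sketch of the general idea, not of how any particular library does it. It's Linux-specific (it reads /proc/cpuinfo), and the two functions are just stand-ins for real SIMD and scalar code paths:

    def cpu_flags():
        # Read the CPU feature flags the Linux kernel reports.
        try:
            with open("/proc/cpuinfo") as f:
                for line in f:
                    if line.startswith("flags"):
                        return set(line.split(":", 1)[1].split())
        except OSError:
            pass
        return set()

    def sum_sse42(data):     # stand-in for the SSE4.2 code path
        return sum(data)

    def sum_fallback(data):  # stand-in for the portable non-SIMD algorithm
        return sum(data)

    # Pick the implementation once at start-up, then use it everywhere.
    fast_sum = sum_sse42 if "sse4_2" in cpu_flags() else sum_fallback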

ARM is a completely different story. They have a simple and consistent SIMD instruction set instead of the absolute train wreck on x86.

thames

Re: flags

x86-64 SIMD is an utter shambles. There are so many different versions and variations in different CPU models.

I had a system which used SSE4.2. I looked at adding AVX support. I therefore spent time coding up AVX versions of the software.

When I bench-marked them though, I found that AVX was actually slower than SSE4.2 despite the theoretical advantages. A bit of googling turned up the information that this was a known problem, but was CPU model dependent.

Given the impracticality of implementing and bench-marking the AVX software on every model of x86 CPU out there, I binned the notion of using AVX and went back to SSE4.2. At least I know that it works.

Chinese media teases imminent exposé of seismic US spying scheme

thames

Re: I'm very dubious about this

Err, there's an explanation right in the Global Times article which El Reg references in the second paragraph of their story.

Here's one example of how seismic data can be used for nefarious purposes:

"By obtaining relevant data from seismic monitoring centers, hackers can deduce the underground structure and lithology of a certain area," the expert said. "For example, it can be inferred whether there is a large underground cavity, and thus whether it might be a military base or command post."

Here's another:

Seismic intensity data is closely related to national security, for instance, some military defense facilities need to take into account factors such as seismic intensity, experts said.

So hypothetically you hack into a country's seismic monitoring system and wait for an earthquake. You then analyze the data to get an idea of the regional geology and how it relates to large underground command posts or aircraft shelters (whose location you find by other means) which you then use to decide how big of a nuclear bomb to use on that target. Then when you decide to start WWIII you have a slightly better idea of what to nuke and how.

What is it you imagine that the US do when they spend many, many, billions of dollars per year on their offensive "cyber" operations? The NSA (etc.) spies don't all spend their entire day playing video games and posting on Facebook. They spend at least some of it hacking into all and sundry, as the Germans (and others) have found out to their cost as one very well known example.

Oracle, SUSE and others caught up in RHEL drama hit back with OpenELA

thames

Re: I'm against this, but wait...

Red Hat are a company which you can pay to provide support to you.

Debian are a community free software distribution. There is no Debian company selling support services. There are however companies distributing derivatives of Debian and providing support for those, such as Canonical (Ubuntu).

The key difference between RHEL and Debian is that RHEL's "upstream" (Fedora and Centos) is controlled by Red Hat who can dictate policy to them while there is no commercial company controlling Debian.

Debian are known for stability and being cautious about releasing new features in the stable release series. The majority of Linux distros are Debian derivatives (revenue is another story).

If OpenELA want to really make a mark in the Linux distro world, they could look at releasing an OpenELA distro with binaries, rather than just being a set of git repositories. They could then be like Centos was. Commercial support would still come from OpenELA members, based on their own binaries. OpenELA could then be a "Debian" equivalent.

Perhaps they have this in mind as their eventual goal. We'll have to wait and see.

thames

Re: founded to create continuity for all Enterprise Linux downstream distributions

Red Hat have already restricted RHEL source access to Oracle, Suse, etc. This is why the latter have formed this organization.

At this point it's still not clear what it was that Red Hat were hoping to accomplish. Was this a reaction to loss of market share to Alma, Rocky, Oracle, and the like? Or do they have plans for big price increases and want to make it a bit more difficult for existing customers to jump ship (or threaten to jump ship) to the alternatives?

Somehow this has to be connected to money, but I'm still not sure just how.

Overall though this Red Hat plan has to be related to distros which are exact clones of specific RHEL releases and not ones which are Fedora or Centos derivatives, because the latter haven't changed policies.

Microsoft: Codesys PLC bugs could be exploited to 'shut down power plants'

thames

Re: Codesys

Almost nobody doing work on actual PLCs (I can't say what it is for Codesys) cares about anything other than ladder or instruction list (IL, a sort of assembly language-like code).

State machine, sequential function chart (SFC or grafcet), and other flow chart type systems are usually add-on code generators which spit out unreadable and horrifically inefficient ladder or IL. Most people programming PLCs have never seen them and most probably don't know what they are.

I've used state machines and grafcet as design and analysis tools, but always manually translated that into ladder for actual implementation. State machines are good for visualizing some problems and SFC for others, but real world systems generally can't be done properly using either exclusively. You generally need a mixture of ladder (or IL) in with the flow chart type language. If the IDE system can't do mixed programming then the flow chart is useless in practical terms and you are better off sticking with straight ladder.

As for your amateur PLC botherer, knowing about the scan rate limit on a PLC is basic beginner knowledge. On most PLCs though the PLC will fault (shut down) if the scan rate limit is ever exceeded. You can then use the programming software to read the fault error message out of memory to see what the cause was. Usually though you'll run out of memory before you can write enough code to exceed the scan rate.

Codesys though isn't actually a PLC, it's an IDE that lets you write embedded software in a manner similar to how you would do it on a PLC. I don't know how it implements concepts like scan rate. Quite possibly it does it rather badly.

thames

Codesys

Codesys is not in itself a PLC. Rather, it's software which can be used to create programs which are written in PLC style and which are then compiled and run on an embedded system. It's mainly used by a few smaller companies in specialized equipment.

The mainstream PLC vendors all have their own proprietary systems which they sell as complete hardware plus pre-installed run-time software combinations.

Most people who work with PLCs will probably never run into Codesys at any time during their career, and I expect that a majority will never have even heard of it. As a consequence, any problems created by this will be fairly limited.

As for IEC 61131, it's a farce. It was a vendor driven process and they simply declared that anything and everything was standard, so it provides no cross-system compatibility or portability. Porting a PLC program means a complete re-write.

Shifting to two-factor auth is hard to do. GitHub recommends the long game

thames

oathtool

The deadline is end of 2023 now? Last year the absolute final deadline was end of 2022.

I made some notes on this last year, and based on that their preferred 2FA method is TOTP.

There is a command line program called "oathtool" which will run on my PC. I can integrate that into my git bash scripts and also add a simple GUI front end via zenity to use for web log-ins.

For example:

oathtool --totp 01234567

Oathtool is in the Ubuntu repositories, and the same probably goes for most mainstream distros. So far as I am aware, the phone app just generates a TOTP code just like oathtool, so there's no need for me to use a phone or other separate hardware.
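
For anyone curious, the whole TOTP calculation fits in a few lines using only the Python standard library. This is just an illustrative sketch of RFC 6238, assuming a base32 secret of the sort GitHub issues (note that plain oathtool takes a hex key unless you pass -b for base32):

    import base64, hmac, struct, time

    def totp(secret_b32, digits=6, period=30):
        # Pad and decode the base32 secret; b32decode insists on full padding.
        key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
        # The moving factor is the number of 30 second periods since the epoch.
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, "sha1").digest()
        # Dynamic truncation per RFC 4226: 31 bits starting at the offset
        # given by the low nibble of the last digest byte.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # made-up example secret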

Linux has nearly half of the desktop OS Linux market

thames

Re: If ChromeOS is Linux...

You had problems getting it to install? It's fussy about hardware? Now that's real, old style Linux right there. I suddenly feel nostalgic about my early days with Linux.

AlmaLinux project climbs down from being a one-to-one RHEL clone

thames

Decisions, decisions

I have serious doubts that Rocky Linux will reliably be able to continue with an exact clone of RHEL. Red Hat can just keep tweaking their T&Cs to exclude them, leaving Rocky scrambling to try to find another loophole.

This fragments the "RHEL compatible" market, including for Red Hat itself. I tested my (free/open source) software on Alma. Will it run OK on the current release of RHEL? Who knows? I'm certainly not going to sign a contract with Red Hat in order to find out.

Without the derivatives being exact clones, it raises the question of just what the raison d'être for any of the RHEL clones is. They can say they are testing applications, but they won't have copies of the proprietary enterprise software which is tying customers to Red Hat, so how will they do that?

As I've mentioned before, I have several minor free/open source projects which I maintain. I've been testing them on Centos, and later on Alma Linux as substitutes for RHEL. If there are no exact RHEL clones, then should I even bother testing on RHEL equivalents or should I just delete that VM and save some time and disk space? I test on various other Linux distros (and BSD), so perhaps I should leave it at that and tell any RHEL users that there's nothing I can do if they have any problems.

I'm sure that Red Hat aren't going to lose any sleep over whatever decision that I make (I doubt they know that I even exist), but it does make me wonder how many other developers are in the same boat and will make the same decision.

Microsoft and GitHub are still trying to derail Copilot code copyright legal fight

thames

Re: Copying or transformative?

The issue will likely come when the project you are working on resembles only one or a very few original works that the model was trained on. The model will likely then spit out suggestions that are very close to the original work.

The biggest problem of course is that due to the way the system works, you will have no idea when this is happening.

To take your art analogy further, imagine a very unimaginative art student whose art education consisted of being shown a selection of paintings in a museum. Now suppose you tell this budding artist to produce a new landscape containing windmills. However, that art student has never seen an actual windmill himself and had only ever seen two museum paintings containing windmills. The resulting new painting will likely contain recognizable copying from the originals. This will be the issue with software.

Going by the way that the copyright system works, I would not be surprised if Microsoft are found not liable, with the people who use their Copilot service instead shouldering the full liability for any resulting copyright violations. This is because Microsoft will be feeding customers small fragments at a time while the customer is the one assembling them into an infringing work.

Software copyright law is based heavily on copyright for books, movies, and other entertainment and educational media. Eventually generative AI will get good enough for it to be applied to that industry (it's already being used for stock images, with resulting controversy there as well). So imagine feeding 50 years of American TV situation comedies into a model and using them to produce new ones based on broad scripts without the original studios collecting royalties on them. It's not going to be allowed to happen and laws will be changed if necessary to ensure that it doesn't. Software will be affected as a byproduct of this as well.

thames
Boffin

Exact cloning is not required for it to be a copyright violation.

"arguing that generating similar code isn't the same as reproducing it verbatim"

US copyright law doesn't require exact copying for it to be a violation of copyright. For software there is a process of evaluation involving distilling the code down to its essentials and comparing it that way. That way you can't for example simply change the variable names and formatting to avoid copyright. If that were allowed then you wouldn't need AI to get around copyright law.

The real issue will likely come down to whether each bit of copying had enough code copied for it to be considered significant enough.

A common analogy is that you can't take the latest Harry Potter novel and simply change the character and place names, add a few scenes from another book, and then publish it as your own work. It's still considered to be a copy even though it's not identical.

Google accused of ripping off advertisers with video ads no one saw. Now, the expert view

thames

The dispute appears to be that someone is claiming that significant numbers of ad slots that were sold to advertisers as "in stream ads" (ads shown as part of a Youtube video) are actually being shown as "out stream ads" (ads shown as small pop-ups with the sound muted on text content sites). The former is considered to be more desirable means of placing ads than the latter.

I suspect that what may be happening is that Google are selling ad placement packages which include a mixture of both (a quick reading of related material suggests that this is what ad consultants were recommending their clients to buy). I suspect that Google's ad salesmen were promising their customers butterflies and rainbows of what is "possible" but the fine print on the contract was vague and waffley on what would actually be delivered. Customers therefore found themselves underwhelmed by the difference between what was promised by the marketers and what was actually delivered.

If this is so, then it looks like the ad men at the customers were getting a taste of their own medicine and I don't have much sympathy for them.

Personally I think that out stream ads (pop-up video ads that somehow get around standard browser auto-play blocking) are horrific and only bottom feeder low quality sites have them. Admen have been buying out stream ads because it provides more potential sites on which video ads can be shown, and they think that video ads are "better" than normal image/text ads. However, it sounds like out stream ads are a flop in terms of effectiveness and advertising directors who bought them are losing their bonuses and are looking for someone to blame.

Canada plans brain drain of H-1B visa holders, with no-job, no-worries work permits

thames

Re: For once, Trudeau's government has made an actual smart move.

This is not an equivalent to the US H1B visa. This is a one time block of 10,000 regular open 3 year work permits being opportunistically targeted at a particular pool of people. They can work for almost any employer anywhere in Canada and their spouses and children can get work and study permits as well.

Canada has loads of immigrants from India, so we are quite familiar with evaluating their qualifications.

thames

Re: Short term visas

This is completely different from a US H1B visa. This is simply a one time block of 10,000 work permits which are being targeted at certain US H1B visa holders who recently lost their jobs in the US or are concerned about losing their jobs. It's not an equivalent to the US H1B visa.

Under the terms of the Canadian program their spouses and children can also get temporary residency visas and work and study permits.

This particular work permit is a three year open visa and you can change employers. Once you get a work permit in Canada and have regular employment, applying for permanent residency is fairly straightforward. Once you have permanent residency you can get citizenship in 3 years.

People who are successful under this work permit will be encouraged to remain in Canada with their families as permanent residents or citizens. However, Canada is looking for people with education, skills, and qualifications, not just letting anybody in. Immigration is primarily intended to be used as a tool to support economic development policy. The Canadian immigration system has a record of conducting strategic "raids" on particular pools of labour on an opportunistic basis, so this one isn't a big surprise.

Rocky Linux claims to have found 'path forward' from CentOS source purge

thames

Re: "Certified"

Some of the people inconvenienced by this are people with Free Software projects who test their software before release. I have several minor projects which I run through automated tests on a variety of distros (and BSD) before each release.

Red Hat have a developer program which could theoretically get me a free copy for testing, but I'm not going to assume the legal liabilities which come with signing the licenses. These licenses include terms such as agreeing to subject myself to the jurisdiction of a foreign court and to reimburse Red Hat for the cost of license audits, etc. No other distro that my projects support requires me to do anything similar.

I'm currently using AlmaLinux. If they can't come up with a reasonable work-around, then I will simply drop Red Hat clones as a testing target.

Centos Stream and Fedora aren't viable substitutes as they are not the same as RHEL and the whole point of testing on a specific target (as opposed to "but it works on my PC!") is to duplicate a user's operating conditions.

I don't imagine that Red Hat will care one way or the other about me in particular, but other people in the same boat will likely be making the same decision as well.

Red Hat strikes a crushing blow against RHEL downstreams

thames

So sort of like the mainframe was at one time? How very IBM. How's IB-Hat's mainframe business doing these days by the way?

thames
Thumb Down

Not very developer friendly

I have a few minor open source projects in several languages which I distribute via the usual avenues for such things. Before I release a new version I run an extensive automated test suite on roughly a dozen different distros (including BSD) on x86 and ARM.

One of these test platforms has up until now been a Red Hat derivative. At first it was Centos, and later AlmaLinux.

The point of testing on Centos or AlmaLinux was to look for problems that people using RHEL may have and fix them before release. If AlmaLinux have to stop doing updates and get out of sync with RHEL then there's no point to running tests on it anymore. Fedora isn't a substitute since it is not the same thing as the current RHEL version.

I'm not going to either sign up for a RHEL license or a RHEL developers' license. All signing a developer agreement would do is give Red Hat greater ability to sue me for some incomprehensible reason in a foreign court. Why would I want to do that?

If AlmaLinux can't work around the current problems then I'll simply drop testing on RHEL derivatives. I don't imagine that Red Hat / IBM will even notice, but over time more developers may come to the same conclusion that I have and Red Hat may find that their platform gradually becomes the less reliable one on which to run actual applications.

For now though I'll wait to see what AlmaLinux can come up with.

This malicious PyPI package mixed source and compiled code to dodge detection

thames

Re: Why have pyc files in a package anyway?

The Python "interpreter" automatically compiles each source file as it is imported and then caches the compiled form as a ".pyc" file. This means that the second time it is executed it can skip the compile step and import the byte code binary directly. This can speed up start up time significantly. While the Python compiler is very fast, on large programs it can make a perceptible difference.

Because of this you don't actually need the ".py" file on imported modules if the ".pyc" file is already present.

This isn't something unique to Python, as many other languages have used a similar strategy.

Some people use this as a very weak form of copy protection so they can distribute Python programs to customers without giving them source code. That isn't what it was originally intended for, it's just a side effect of having a faster start up.

However, this does mean that there is a use case for having ".pyc" files rather than source in a package. This in turn means that having the standard installation tools exclude ".pyc" files would break at least some existing software out there.

The solution is to simply have the code analysis tools disassemble the ".pyc" files and analyze those (the output is like a form of assembly language). A disassembler comes as part of the Python standard library.
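
A rough sketch of what such a check could look like, assuming the CPython 3.7+ pyc header layout (the magic number only matches when the analysing Python is the same version that compiled the file):

    import dis, marshal, sys
    from importlib.util import MAGIC_NUMBER

    def disassemble_pyc(path):
        with open(path, "rb") as f:
            header = f.read(16)  # magic, bit flags, timestamp/hash, source size
            if header[:4] != MAGIC_NUMBER:
                sys.exit("pyc was compiled by a different Python version")
            code_obj = marshal.loads(f.read())
        dis.dis(code_obj)  # prints the byte code as an assembly-like listing

    disassemble_pyc(sys.argv[1])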

Python Package Index had one person on-call to hold back weekend malware rush

thames

Re: Difference between PyPi and NPM?

It's apparently a typosquatting attack. The malicious author creates a new package which has a name which is very similar to a commonly used legitimate package.

When a developer who wants to install a legitimate package misspells the package name he may get the malicious one instead. It then gets installed and used in a project which the developer is working on.

The target seems to be web developers. When the victim tests his project with his web browser, the malicious package injects some JavaScript which looks for bitcoin addresses in his browser's clipboard. It then changes the address to the malicious author's own so that any bitcoin transactions go to the attacker's own wallet.

The attackers are using automated scripts to create hundreds of new package names which are very similar to legitimate ones, counting on human error to occasionally cause one to get picked due to a typo.

This is a problem inherent to any repository which is not manually curated, regardless of the language.

The most straightforward solution is to only install packages from your distro's repos instead of directly from PyPI.

Double BSD birthday bash beckons – or triple, if you count MidnightBSD 3.0

thames

OpenBSD 7.3 won't install in VirtualBox

I installed both FreeBSD and OpenBSD in VMs on Tuesday. FreeBSD installed without any problems.

OpenBSD 7.3 however crashes during the installation process. I eventually ended up installing 7.2 and then did an upgrade to 7.3, which went fine.

I just repeated the attempt with a new download a few minutes ago before making this post. It's not clear what the problem is. The VM is VirtualBox 6.1.38 on Ubuntu 22.04.

I install all the major Linux distros and BSDs in VMs for software testing, and this is the only one which is giving me this sort of problem. Aside from that, it worked fine once it was installed and running.

One thing that I did notice is that in OpenBSD the Python version has been upgraded to 3.10 from 3.9, while with FreeBSD it remained unchanged at 3.9. This isn't a problem (I welcome it in fact), but it's worth noting that a change like this has been made in a point release.

TikTok: Is this really a national security scare or is something else going on?

thames

Re: TikTok is a smokescreen

I've just skimmed over the actual proposed legislation, and TikTok isn't even mentioned. What it is instead is a law to allow the US president to arbitrarily ban pretty much anything involved in communications if he doesn't happen to like it. The VPN industry are apparently particularly worried.

Covered are:

  • Any software, hardware, or other product which connects to any LAN, WAN, or any other network.
  • Internet hosting services, cloud-based services, managed services, content delivery networks.
  • The following is a direct quote: "machine learning, predictive analytics, and data science products and services, including those involving the provision of services to assist a party utilize, manage, or maintain open-source software;"
  • Modems, home networking kit, Internet or network enabled sensors, web cams, etc.
  • Drones of any sort.
  • Desktop applications, mobile applications, games, payment systems, "web-based applications" (whatever those are interpreted to mean).
  • Anything related to AI, quantum cryptography or computing, "biotechnology", "autonomous systems", "e-commerce technology" (including on-line retail, internet enabled logistics, etc.).

In other words, it covers pretty much everything in the "tech" business. Singling out TikTok in particular is nothing but a red herring meant to divert attention from what is actually going on.

The bit covering "open source software" is particularly troublesome. People running open source projects may have to seriously think about moving their projects outside of US influence.

thames

Re: "Nations are one by one banning it from government-owned devices"

If there are genuine security concerns then the only apps which should be present on government owned devices are those which have gone through an official security review and been approved as valid and necessary for the device user to perform his or her job.

And for the 95 per cent of the world who aren't the US, Facebook, Twitter, and the like are equally as problematic as TikTok and for the same reasons.

Putting out a blacklist is pointless, as anyone with any knowledge of the subject would know. Lots of apps rely on third party libraries which have data collection features built into them, it's part of their business model. This data is then sold to data brokers around the world with few or no controls over what is done with the data or who it is sold on to. There are so many apps in existence that blacklisting is an exercise in futility.

Don't allow anything on government devices which has not gone through a security review or whose data is hosted outside of one's own country. The same rules should be applied to any business which handles matters which have security implications.

I don't know of any business which allows individual users to install whatever they want on company owned PCs, so why should phones be any different?

The same thinking should be applied to the OEM and carrier crapware that gets pre-loaded onto phones as well. There should be no pre-loaded apps beyond those which have been approved.

Chinese web giant Baidu backs RISC-V for the datacenter

thames

I suspect that Baidu have a list of things they are interested in using RISC-V for, but are not making solid commitments to any in particular on any specific time line until they've done some researching and testing.

Overall though, I expect them to shift to RISC-V over a period of years. Anything else is too much of a risk given the current international environment.

We can probably expect to see India going down the same road for the same reasons, just a few years behind though.

Critical infrastructure gear is full of flaws, but hey, at least it's certified

thames
Boffin

Let's look at a few CVEs

I had a look at the actual CVEs for kit that I'm familiar with and with regards to that kit I'm not too worried with what I saw.

I'll take Omron as an example because their kit is the most mainstream of that listed. They had three CVEs listed against them. One CVE was for passwords saved in memory in plain text. However, in decades of doing PLC programming I've never seen the password feature used even once on any PLC. It's something which a few customers want, but it just isn't used by most. Not all PLCs even have a password feature. In practical terms there's nothing to stop someone from simply resetting the PLC to wipe the memory and loading their own program anyway.

The other two CVEs basically amount to the user programs are not cryptographically signed. That's no different from the fact that I can load programs on pretty much any PC without those being cryptographically signed either. I can write a bash script and run it without it being cryptographically signed. Accountants can write spreadsheets and run them without them being cryptographically signed. Saying that PLC programs should be cryptographically signed is really stretching things a bit.

The main real world security vulnerabilities in industrial control systems have been bog standard Windows viruses, root kits, and the like in PC systems running SCADA software.

My opinion of security for industrial equipment is the OEMs are not security specialists and they won't get it right so they're better off not trying. Also the in service lifetimes of their kit can be measured in decades, which is far beyond any reasonable support period.

More realistically, isolate industrial kit from any remote connections. If you need a remote connection, then use IT grade kit and software as a front end to tunnel communications over and get the IT department to support it as they should know what they're doing, unlike say the electricians and technicians who are often the ones programming PLCs.

Expecting non-security specialists to get security right 100 per cent of the time is a policy that can only end in tears.

Havana Syndrome definitely (maybe) not caused by brain-scrambling energy weapons

thames

Already solved in a Canadian investigation made at the time in question

Canada investigated the problem and fairly quickly found it was due to excessive use of pesticides during the zika virus outbreak in the Caribbean at the time. A US contractor from Florida was used to fumigate the embassies and staff housing with organo-phosphate pesticides.

Due to the panic over zika at the time (the virus was causing serious birth defects), the fumigation was carried out much more frequently than normally recommended. The result of this was that people were exposed to toxic levels of pesticides.

Blood samples from Canadian diplomats showed above normal levels of organo-phosphate pesticides. The symptoms associated with this are consistent with those associated with so-called "Havana Syndrome", including hearing sounds that aren't there and the rest. Examination of the patients also found nervous system damage consistent with pesticide poisoning. Organo-phosphates affect the nervous system (some types of organo-phosphates are used as military nerve gas), so effects on the brain are entirely to be expected.

The US government were aware of the Canadian medical investigation but chose to ignore it. Hypothetically this may have been due to concerns about the finger of blame (and lawsuits) coming back on the US officials who approved the use of excessive amounts of pesticides.

By order of Canonical: Official Ubuntu flavors must stop including Flatpak by default

thames

Re: Who cares?

As I mentioned in a post above, the issue is with respect to the terms and conditions for using the "Ubuntu" trademark and financial support in the form of free services. If you want to use the trademark and get official help, then you need to follow the guidelines.

thames

Re: Software Freedom

Derivatives are free to do what they want with the software. If they want to use the "Ubuntu" trademark or get free infrastructure and other benefits though, then they have to stick to Ubuntu policies with regards to official "flavours".

Other distros do take the Ubuntu software and don't follow Ubuntu guidelines, but they don't get to call themselves "Ubuntu".

Try starting your own distro and calling it "Debian" without Debian's permission and you may find that Debian the organization may have a sense of humour failure. The same goes for Red Hat or Suse.

Ubuntu are probably the least restrictive of the major distros when it comes to derivatives.

thames

Re: future of apt on Ubuntu?

Aside from Firefox, Snap seems to mainly have replaced PPAs. It used to be that if you wanted the newer version of some obscure package you needed to find some potentially dodgy PPA and install that.

Now that's done through Snap, there's an official Snap store, and Snap packages are confined so they have limited access to resources.

It's useful for certain things, such as allowing one Firefox package to run on all Ubuntu versions instead of rebuilding it for each version. It's also good for obscure packages that need to be kept up to date.

On the other hand, it's probably not going to replace the bulk of Deb packages, as there's no reason to do so. I have a handful of Snap packages installed (e.g. the Raspberry Pi imager), and I normally look for a Deb first before falling back to a Snap. In some cases without the Snap I would probably have to build from source.

The main targets for Snaps are actually applications from proprietary vendors who traditionally had horrifically bad packages, and games, as game studios don't want to update their packages for each new release. You could also think of Snap as being an alternative to something like Docker when it comes to server applications.

Flatpak on the other hand seems to be pretty badly thought out. It only does GUI apps, doesn't handle server cases, and package management is pretty poor (as described in the article). Snap is what Flatpak should have been, and the only real reason why Red Hat and friends persist with Flatpak seems to be NIH. There's nothing to stop each distro from setting up their own Snap store the same way they do their Deb/RPM repos.

Could RISC-V become a force in high performance computing?

thames

Re: A mixed blessing?

Massive incompatible fragmentation outside of the core instruction set pretty much describes x86 vector support, and that doesn't seem to have hurt it any.

The big question is whether CPUs will be available on low cost hardware equivalent to a Raspberry Pi so that people can test and benchmark their code, or if you have to book time on the HPC system just to compile and test your code. That is what will make the difference.

If you are doing serious vector work you need to use the compiler built-ins / extensions, which are more or less half way between assembly language and C. Good vector algorithms are not necessarily the same as non-vector algorithms, which means you need actual hardware on your desktop for development. This is the real advantage that x86 and (more recently) ARM have, and which RISC-V will need to duplicate.

US and EU looking to create 'critical minerals club' to ensure their own supplies

thames

Re: What about Canada and Mexico?

It's about the electric car market. The US introduced highly protectionist subsidy legislation for the US domestic car market. The goal was to establish US manufacturing sites in the market before other countries got their foot in the door. This was in direct violation of NAFTA rules, and Canada and Mexico threatened retaliation.

Canada then offered the US a face saving formula of "access to critical minerals" in return for watering down the protectionist measures. What exactly that means is anybody's guess, as nobody was seriously talking about export bans to the US (or anyone else) anyway. However, Canada has been using the "critical minerals" phrase (very vaguely defined) in trade talks with a variety of countries (particularly in the Far East). Canada is also very good at finding and exploiting American political weak points, and found a formula that played to American fears about China. Mexico was brought into the plan and the two presented a united front that got the Americans to water down their protectionist measures.

The EU were already unhappy about the new American auto market protectionism and were also talking about retaliation in the form of action through the WTO. This is something which the US would be guaranteed to lose, but would take years to get a final decision on (assuming the US didn't simply ignore it). Once they saw the US reverse course in the face of threatened Canadian and Mexican retaliation, the EU demanded a similar deal. The same "critical minerals" red herring was brought into the discussion for the same reasons.

Biden is as protectionist as Trump ever was, even more so in some ways. This latest venture in the form of the "Inflation Reduction Act" is an absolute disaster for free trade and a major step towards state control of industry. International auto trade would be a major casualty of it if it isn't de-fanged.

Any talk about a "critical minerals club" controlled by the US (or the US and EU) is simply talk. None of the big exporters have any incentive to sell to anyone other than the highest bidder and really aren't interested in being used as cannon fodder by the great powers.

Oh, WoW: Chinese gamers to be cut off from Blizzard games next week

thames

Re: Cause and effect...

Blizzard wouldn't leave the contract renewal negotiations for the last minute, so they will likely have been negotiating for at least 6 months. The statement from NetEase in November would seem to indicate that talks were going nowhere by then.

I would imagine that it's all about money. NetEase have probably been getting bent over a barrel by Blizzard and want more money. Blizzard probably want to pay them less.

Given that there's been no announcement yet about who is going to replace NetEase, it sounds like Blizzard has had even less luck at finding a replacement who are willing to accept the terms that are on offer. If it was NetEase who were the problem, then Blizzard would have found someone else by now.

Should open source sniff the geopolitical wind and ban itself in China and Russia?

thames

Re: Absurd

The article is based on the assumption that the US government will dictate terms to the rest of the world and everyone will fall in line. The world isn't like that. It's worth remembering by the way that the RISC-V organization moved out of the US specifically to get away from this sort of thing.

Once politically based license restrictions start there would be no end to them. Loads of people will start putting in license terms that say things like "cannot be used by anyone who does business with the US military". That's why this sort of thing would backfire on the US.

If you think that's a bit over the top, if you follow the links associated with the "advocate" promoting this idea, you will find that the licenses being promoted as being "ethical" do exactly that.

The Tornado Cash example was a complete red herring, as it's the Tornado Cash company who were being blacklisted for money laundering, not the source code. The Github repo is still on line, but the company's access to it has been frozen. There's nothing to stop someone else from forking the code and continuing on with it.

There's a good reason why proper Free Software has no restrictions on fields of endeavour. Once you bring politics into software licenses the entire field would Balkanise into incompatible licenses and the entire field of software would fall into the hands of a few big proprietary vendors who would base their business on having huge teams of lawyers who can navigate the licensing issues in each country.
