* Posts by thames

1125 publicly visible posts • joined 4 Sep 2014

Code contributions to GCC no longer have to be assigned to FSF, says compiler body

thames

Re: Apple and GPL

Only the copyright owner has the legal right to sue over license violations. If the FSF did not own any substantial copyrights in GCC then Apple/NeXT could have had the case dismissed as the FSF would not have had legal standing in court. This is the main reason the FSF set up the copyright assignment system.

Projects where the copyrights are widely dispersed are more difficult to enforce, as someone who has substantial copyright ownership has to be willing to volunteer to bring the complaint to court.

There is also the problem that there have been a few cases in dispersed-copyright projects where one copyright owner sued when the majority didn't want to, preferring instead to continue negotiating licence compliance with the violator.

By having the FSF hold the copyrights (with a license back to the creator to re-license as he wishes) the FSF got around this problem.

The FSF were pioneers of Free Software, and there was initially a lot of legal uncertainty as to whether any non-proprietary license could be legally enforced. There was also the issue of whether there would be the need to do a rapid change to license terms in case a loophole was found in the license.

After many years though there are enough legal precedents established that this is much less of an issue. The licenses have been tested in court and have stood up as enforceable, and Free Software has become the norm and is well understood from a legal perspective.

This is probably why they are now changing to allow for dispersed copyrights the same as most other Free Software projects do.

The main advantage of widely dispersed copyrights is that it becomes very difficult for any one person or any one organization to stage a takeover of a project, as was done with MySQL (by Oracle). The current environment is very different from the one which existed when the FSF were first starting out. In essence, by largely achieving their objective they have at least partially made themselves redundant.

thames

Apple and GPL

Technically it was NeXT who developed the Objective-C front end for GCC. Apple/NeXT's problem was that they wanted to keep that compiler proprietary rather than releasing it as open source, despite it being based on FSF-owned GCC.

Here's what the FSF had to say about it: "Consider GNU Objective C. NeXT initially wanted to make this front end proprietary; they proposed to release it as .o files, and let users link them with the rest of GCC, thinking this might be a way around the GPL's requirements. But our lawyer said that this would not evade the requirements, that it was not allowed. And so they made the Objective C front end free software."

The FSF were able to use their copyright ownership of GCC to make credible legal threats to sue Apple/NeXT over copyright violation unless they released the Objective-C code under GPL. Apple/NeXT backed down, and this is why it ended up under a GPL license.

When Apple developed their next language, Swift, they based it on LLVM, which let them keep it proprietary at first. Eventually they released it as open source, but retain the option of going back to a proprietary licence for future versions.

The main interest in LLVM comes from companies who don't want to write their own compiler from scratch, but want a proprietary compiler they can sell.

Faster Python: Mark Shannon, author of newly endorsed plan, speaks to The Register

thames

Re: Or...

I've done plenty of real time embedded work in interpreted Basic running on an Intel 8052AH 8-bit chip with 32 KB of RAM, clocked at around 10 MHz. Loads of other people have done the same. The market for this was big enough that several companies were making dedicated versions of this chip with the Basic interpreter masked into ROM on the chip itself. It's a variant of the 8031, which was a top-selling micro-controller CPU/SOC.

All of the major industrial control vendors (Siemens, AB, Schneider, etc.) sold them as modules for their control systems. Numerous single board computer vendors, whose market is entirely focused on embedded, sold them as well.

Loads of major industrial processes and equipment with more dedicated embedded control systems used this as part of their system, or as the complete thing.

Someone who genuinely knew more than a smattering about embedded control and had years of experience in the industry would know all this.

These days I would quite happily use a Raspberry Pi running a Python program in many types of real time embedded applications. People who have genuine experience in a wide range of embedded control applications would know that "real time" actually means "can meet the process deadlines".
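
To make "meets the process deadlines" concrete, here's a minimal sketch of a fixed-period scan loop in plain Python. The 10 ms period and the empty scan_cycle() stub are my own assumptions for illustration only:

    import time

    PERIOD = 0.01  # 10 ms scan time (an assumption for illustration)

    def scan_cycle():
        pass  # read inputs, run the control logic, write outputs

    deadline = time.monotonic() + PERIOD
    for _ in range(1000):
        scan_cycle()
        deadline += PERIOD
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)  # deadline met; idle until the next scan
        # else: a missed deadline, which a real system would log or alarm on

As long as scan_cycle() plus interpreter overhead stays under the period on every pass, the system is "real time" in the sense that actually matters.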

And while we're on the topic of interpreted languages in real time applications, I should mention that I've used Python in PC based embedded industrial systems doing real time interaction with robots and measuring equipment.

I'll keep this short, but here's a list of other interpreted language systems that I've used or are familiar with which have seen widespread use in real time industrial control or test and measurement: Python, GWBasic, QBasic, HP Rocky Mountain Basic, UCSD Pascal, and a host of proprietary languages.

And I should add that until very recently most or all of the PLCs used to control factories around the world used interpreters.

I would suggest that you wind your neck in on this topic.

thames
Boffin

Re: Nary a natter of Nuitka?

Most of the "let's just add a JIT / static compiler to Python" projects end up with something that is significantly faster than CPython in certain types of application, but are on average somewhat slower than CPython in all others. That's why there are so many attempts to replace CPython which look promising on simple benchmarks to start with but which run out of steam once they get enough implemented for a wide range of real world programs.

The low level changes to data structures and other internals they are talking about in this project are the sort of thing needed to actually get major performance increases. These sorts of changes may also make it possible to get better performance from static and JIT compilers, not just a faster interpreter.

In other words, the real work isn't just gluing a JIT or C code translator onto the side. That has been tried many times and a JIT usually results in slower performance, not faster, as well as devouring huge amounts of RAM.

Pypy (Python written in Python) did all of this already and got better performance. Their description of how Pypy works is that they wrote a method of creating interpreters for any language, for which you get a specializing JIT compiler as a side effect for no additional effort. They also improved the internal data structures and memory layouts. However, they did all of this at the expense of being incompatible with most of the existing Python libraries which are written in C, which means a large proportion of the popular libraries out there.

The project being discussed here is apparently going to try to accomplish many of the things which Pypy did, but to do so in a way which doesn't break compatibility with existing C libraries.

One of the things which has made Python so successful is that, unlike many languages, it doesn't insist that all libraries be written in the language itself. VM-based JITs are particularly prone to this pitfall.

Instead CPython is written in a way which makes it straightforward to use libraries written in C, with much less call overhead than in many other languages. So if there is a demand for a library to do something, it has been common to simply find an existing one written in C, write a Python binding for it, and you now have a Python library with minimal effort. Since C has become the lingua franca of the programming world, there is now a huge selection of very fast libraries available to Python with minimal effort. This is also why many experienced Python programmers don't see an issue with writing bits of an application in C when that would be of benefit.
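
As a minimal sketch of how cheap such a binding can be, here's the standard library's ctypes wrapping a function from the C maths library. The double cos(double) in libm is real; everything else is just illustration, and find_library() may need help locating the library on some platforms:

    import ctypes
    import ctypes.util

    # Load the C maths library and describe the signature of double cos(double).
    libm = ctypes.CDLL(ctypes.util.find_library("m"))
    libm.cos.argtypes = [ctypes.c_double]
    libm.cos.restype = ctypes.c_double

    print(libm.cos(0.0))  # 1.0, computed by the C library, called from Python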

However, anything Python related which breaks compatibility with a large proportion of those C libraries tends to be doomed to failure in the market, which is why Pypy hasn't been a huge success.

The people working on the project discussed in this article are well aware of this as they have been around the block multiple times on this subject, so they are no doubt well aware of what the issues are and will try to avoid them.

thames

Re: Making Python faster

If you're referring to the discussion near the end of the article, better algorithms will many times beat better compilers. Like you said "there is no compiler optimization for a programmer who doesn't know how to code". If the programming language makes it easier and faster to write your program then you can spend more time on the algorithm and get it right, resulting in a faster program.

There's no guarantee that your particular problem is the sort to which better algorithms can be applied, but many are. When it is, the "better programmer" is the one who understands algorithmic principles and can pick the best one, rather than the one with the most encyclopedic knowledge of a particular compiler's performance quirks.
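
A trivial illustration of the point, just as a sketch: the same membership test against a list and a set. No compiler optimization rescues the linear scan; choosing the right data structure does:

    import timeit

    setup = "data = list(range(100_000)); lookup = set(data)"
    print(timeit.timeit("99_999 in data", setup=setup, number=1_000))    # O(n) scan
    print(timeit.timeit("99_999 in lookup", setup=setup, number=1_000))  # O(1) hash lookup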

Cloudflare launches campaign to ‘end the madness’ of CAPTCHAs

thames

Re: Hardware dongles?

The main problem with the few CAPTCHAS that I see is that they tend to be ambiguous street scenes from American suburbia. If you happen to live in American suburbs, like say the sort of people at US tech companies do, then they may make sense.

If you're not an American living in an American suburb, then you end up having to ask yourself "how would an American answer this question?" based on what you know about the US from American movies.

What they need is less ambiguous CAPTCHAS, but I suspect they are afraid that people will then use image recognition and AI to solve them.

Not that I think the Cloudflare proposal is a better solution. In fact I think it's worse.

thames
WTF?

What?

Either there is something missing from the description, or the idea is pointless. How does the presence of a YubiKey or the like tell you there is someone physically at the computer and it's not a bot? Anything a person can do with a YubiKey a bot can do.

And how is the attestation "not uniquely linked to the user device" if they are using a device whose whole point of existence is to uniquely identify itself? I'm not sure I'd want to have Cloudflare tracking me all over the Internet using a YubiKey they would make me buy.

I see CAPTCHAS very rarely, and only on sites which are especially concerned about not allowing bots to DDOS them. CAPTCHAS would be far less of a problem for me than buying a YubiKey and using it would be. Whatever problem they are trying to solve is one that I don't have.

Guido van Rossum aiming to make CPython 2x faster in 3.11

thames

The proof of the pudding will be in the eating.

From reading the references, it appears that they hope to get most of the performance increase from using an "adaptive specializing interpreter". Experiments with this have given a 25% to 50% increase in performance.

The real question of course will be what sort of applications will benefit from this. Most comparative language micro-benchmarks you see on the Internet were written to show off specific optimizations of particular compilers. If your code deviates too much from the micro-benchmark the performance vanishes.

One of the previous attempts at performance optimization in Python was called "Unladen Swallow" and involved adding an LLVM based JIT compiler to Python. It did fantastically well on the common multi-language micro-benchmarks that many people like to use. However, tests in actual applications found that it made real world code slower.

The Unladen Swallow team then constructed a set of benchmarks based on a selection of large blocks of code used in a broad selection of common applications. Based on the results of this they decided the LLVM based JIT compiler was not a promising approach after all, and that standard JIT compilers were not a magical solution.

The only thing that survived from the Unladen Swallow project was the benchmark, which subsequent projects such as Pypy have adopted. Pypy is another Python implementation, which uses a "specializing JIT compiler" and which does show real performance increases, but sacrifices compatibility with C extensions (which is why it has seen limited use).

The way these sorts of projects tend to go is that we won't really know whether the ideas being pursued will actually work until it's done and tested on real world code. Overall though, it looks very interesting.

Nasdaq's 32-bit code can't handle Berkshire Hathaway's monster share price

thames

Re: python to the rescue

It's perhaps fortunate then that Python was created by someone trained in mathematics. It uses Karatsuba multiplication for large integers, an algorithm which has been around since the early 1960s.

Python is also the only language that I am familiar with that does modulo operations in the mathematically correct manner in all cases. Every other language that I have tested (as well as spreadsheets) will produce mathematically incorrect answers for some inputs, because following the CPU's native truncating behaviour was "faster". Python's creator insisted on mathematical correctness over performance in that case, so I'm pretty sure he did the same with other integer operations.
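
A quick illustration of the difference (the C comparison comes from C's truncated division, which most languages inherit):

    print(-7 % 3)         # 2 in Python; C gives -1 for the same expression
    print(7 % -3)         # -2: the result always takes the sign of the divisor
    print(divmod(-7, 3))  # (-3, 2): the quotient floors toward negative infinity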

As a typical example, the sum of a large integer array in pure C is roughly 7 times faster than in Python (the exact amount in C will vary by integer size and type). In Python however the sum will never overflow while with C you are pretty much guaranteed to have an integer overflow even with a relatively "small" array.
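
You can see this for yourself in a couple of lines; the values below would overflow a C int64_t almost immediately:

    values = [2**62] * 1000
    total = sum(values)        # never overflows: Python ints simply grow
    print(total)
    print(total.bit_length())  # 72 bits, beyond any native integer type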

You can do integer operations in Python without ever having to check for overflow. In typical languages using native integers people don't often check for overflows either, but then we get constant security notices about vulnerabilities due to unchecked integer overflows.

I can pretty much guarantee that the majority of programmers don't even know how to check for integer overflow correctly. There are many corner cases that vary depending on the operation and integer type. Several years ago I had reason to find all the integer overflow cases for all the integer types and operations, and I could not find a single comprehensive source for this. Not that it matters of course, since as I said the average programmer never checks anyway.

If you really need maximum numerical performance in Python, then you are almost certainly using a numerical library anyway, the most common ones are written in C or Fortran and operate on native integers or floating point numbers in native arrays. The libraries can take care of the details.

If however you just need to add two numbers together, then just do it in Python and you can be assured that the result will never overflow without you having to add multiple lines of code around each operation to prevent it. If you are just adding a couple of numbers the performance overhead is insignificant in practical terms.

With Python, pragmatism is the preferred course of action.

The quest for faster Python: Pyston returns to open source, Facebook releases Cinder, or should devs just use PyPy?

thames

Re: C and then some

The main driving force behind adding optional type hints to Python has nothing to do with compilers. It's there strictly because people flogging IDEs want it so that their static analysis algorithms can do better code completion. The original "unofficial" version was ported in from an IDE by its maker (I can't remember which one), and the current "official" one was created to replace that with something better. It was a case of "well, if you're going to do it anyway, we might as well make sure that it gets done properly".
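
For anyone who hasn't seen them, the hints are just optional annotations; this made-up function illustrates the idea. CPython ignores the annotations at run time, and it's the IDEs and static analysers that consume the types:

    def parse_port(value: str, default: int = 8080) -> int:
        # An IDE can now flag parse_port(8080) as a type error before you run it.
        return int(value) if value.strip() else default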

Cython has its own type declaration system, but it's oriented just towards giving the C compiler more information. It isn't part of the Python language itself, and code using it will no longer run under standard Python. This is why you add the type declarations as part of the final optimization step, after you are satisfied your Cython program is working correctly.
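
As a hedged sketch of what that optimization step can look like, here it is in Cython 3's "pure Python" mode, where the declarations are ordinary annotations. The function itself is invented; the file still runs under CPython (the cython shim module ships with Cython and acts as a no-op there) and gains C-typed locals when compiled with Cython:

    import cython

    def trapezoid(a: cython.double, b: cython.double, n: cython.int) -> cython.double:
        # When compiled, the typed locals below become plain C variables.
        dx: cython.double = (b - a) / n
        total: cython.double = 0.0
        i: cython.int
        for i in range(n):
            x: cython.double = a + i * dx
            total += (x * x + (x + dx) * (x + dx)) * dx / 2.0
        return total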

Personally I do however think that the Cython approach is the one most likely to produce the greatest performance improvements, although I wouldn't do it exactly the way they did. The main issue with performance is actually related to how dynamic the language is. This requires constant run time type checking and all the baggage that goes with that. It lets you do things in far fewer lines of code than most other languages, but it comes at a price. If you could selectively sacrifice some dynamic features on a method by method basis to automatically generate C extensions then you could have the best of both worlds. There are some third party add-ins such as Numba which do this for numerical processing, but I think that much better could be done if it were part of the language.
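
Numba's approach, for comparison, is a decorator that JIT-compiles just the function you mark. A small hedged example (the function is made up, and numba is a third party install):

    import numpy as np
    from numba import njit

    @njit
    def sum_of_squares(values):
        # Compiled to native code on first call; later calls skip the interpreter.
        total = 0.0
        for v in values:
            total += v * v
        return total

    print(sum_of_squares(np.array([1.0, 2.0, 3.0])))  # 14.0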

thames

Re: C and then some

The reason that C is such a popular language is because it does what it's intended to do well and it's ubiquitous. It's not what I would choose if I were designing a language, but it's there and I have reconciled myself to it. I wouldn't use it for everything, but then I don't believe there is any language which is best at everything, which is why I prefer to use Python for some things and C for others, as well as other languages where they are appropriate.

As for x86, its assembly language is such a dog's breakfast I doubt that any reasonable language would fit it well. It's just layer upon layer of more "stuff" added in no logical or consistent fashion, SIMD operations being a good example.

thames
Boffin

C and then some

"Python performance is not always a problem, since library developers have followed Van Rossum's advice and written performance-critical code in C. "

If you absolutely, positively need performance above all else, then there aren't really many useful alternatives to C. The accepted practice is to write the application in Python and then benchmark it. If performance meets your targets, then you're done.

If it doesn't meet your targets then use the benchmarks to guide your optimizations. Optimize in Python first. For those bits that still don't meet your target, write them in C.
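
A minimal sketch of that workflow using only the standard library profiler (slow_report() is a made-up stand-in for whatever your benchmarks flagged):

    import cProfile
    import pstats

    def slow_report():
        return sum(i * i for i in range(1_000_000))

    cProfile.run("slow_report()", "profile.out")
    stats = pstats.Stats("profile.out")
    stats.sort_stats("cumulative").print_stats(10)  # the ten biggest time sinks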

Python is designed to interface well to C. The reason why CPython persists in remaining the dominant implementation is because it interfaces so well to C. If you are doing something typical in Python, chances are there is a widely used library which does it, and if performance is critical it will be written in C.

There is also increasing use of Cython for these sorts of optimizations. Cython is an extended version of Python which translates Python to C which you then compile as a C extension. It's a good answer for those who feel allergic to C.

Spent Chinese rocket stage set to make an uncontrolled return to Earth

thames

Re: Not uncommon

Yes, well you'll notice that McDowell's statement was qualified as "above 10 tonnes". Europe was intentionally dumping spent 8 ton rockets into the general area of Canada and Greenland, including roughly a ton of highly toxic hydrazine into the atmosphere as recently as 4 years ago, and may still be doing it for all I know. Protests from Canada to Europe were met with a "do I look like I give a ****?"

A trip to the dole queue: CEO of $2bn Bay Area tech biz says he was fired for taking LSD before company meeting

thames

Perhaps when he presented his "brilliant new plan" to the board one of them asked him "WTF? Are you on drugs or something?", to which he answered "why yes I am, how did you guess?"

thames
Pint

Re: Silly Con Valley tried this "microdosing" thing back in the '80s.

This is like being very drunk as a university student. You would have what you felt were the most brilliant and profound insights ever, and couldn't understand why other, more sober people couldn't see it when you tried to explain it to them.

When you woke up the next morning, as well as having a headache you would also have the realization that what you thought was brilliant the night before was either completely mundane or complete nonsense.

Stuxnet sibling theory surges after Iran says nuke facility shut down by electrical fault

thames

Re: What ?

The only large scale reactors that have actually been built that can use thorium that I am aware of are heavy water moderated ones. Reactors of this sort are used around the world as major power producers. The main attraction of these reactors is that they can use natural uranium, and so countries can have a secure fuel supply without needing a uranium enrichment plant. Their very high neutron efficiency theoretically allows them to use thorium as fuel, but all currently use uranium fuel, as uranium is cheap enough that it isn't worth the bother to use thorium.

Canada is the leader in this field, and most of these types of reactors are based on Canadian technology. This has included experimental work on thorium fuel. India however have done the most work on using thorium fuel, as their thorium reserves are much larger than their uranium reserves and so they have the most interest in this.

There are other theoretical reactor designs which might use thorium, but so far as I am aware none have actually been built and demonstrated.

Iran are considering building heavy water reactors. However, these are precisely the type of reactors that the US are the most upset over. So, I don't think that this is the sort of solution you imagine it to be.

thames

I'll take it with a grain of salt

Given that it's Iran, their facilities can fall over all by themselves without any outside help due to penny pinching on the actual implementation. I'll take this story with a grain of salt until and unless there's a bit more proof behind it.

Sitting idle while global chips fry: US car industry asks Biden to earmark cash for automotive semiconductors

thames

The fine print

Before the US government hands out taxpayer money to private companies, they might want to add in some terms and conditions stating that the money will only go to plants that:

  • have a reliable supply of electricity (e.g. not in Texas),
  • are not in areas busy sinking beneath the waves (e.g. not in Florida or coastal Texas),
  • are not on top of an earthquake fault (pick your own examples),
  • are not in drought-prone areas (much of the southwestern US),
  • and are not in areas with routine tornadoes that knock down power lines and other essential infrastructure.

They also need to avoid areas with poor education systems, loopy fundamentalist religious governments, and other things which would make attracting and keeping a skilled workforce difficult.

Otherwise, what's the point?

Canonical: Flutter now 'the default choice for future desktop and mobile apps'

thames

Precisely, that is exactly the issue. The problem is that a lot of developers are writing for mobile first, with desktop as a secondary target. Ubuntu are bowing to the reality that if they want to get more application developers to port their apps to Linux then they need to offer development tools that those developers are familiar with on phones and tablets rather than expect them to learn all new ones for desktop.

Right now a lot of cross-platform stuff is being written in things like Electron, which is rather sub-optimal to say the least. Flutter sounds better than Electron, although I haven't tried it yet.

Ubuntu have been writing a new installer using Flutter to replace the two separate ones they use for Server and Desktop. They will do the same for some of their other system management GUI tools.

I plan on giving it a try some time soon before passing actual judgment on it.

Huawei CFO's legal eagles take HSBC to court in Hong Kong to obtain evidence against US extradition

thames

Re: Sanctions are messy, politics is worse.

The question of "double criminality" (whether there would have been a crime under Canadian law) was apparently always the weakest defence, even if that wasn't obvious to someone not familiar with how this works. The US framed the extradition request as "fraud" specifically to get around the issue of Canada not having sanctions in place (as we didn't pull out of the agreement with Europe like the US did).

At one point the judge basically just asked the crown (who are handling the extradition request from the US) whether they would prosecute this case as fraud, to which we shouldn't be surprised to hear that the answer was "yes".

Not an awful lot of thought seemed to go into it by the judge, and of course the Canadian court isn't concerned with the issue of whether any fraud actually occurred, just that "fraud" is a crime in Canada. Whether Meng broke any US sanctions, or whether she committed any fraud in doing so, simply isn't something the court will have any interest in.

This is why her defence rests on procedural issues, as these are the only issues the court may have any actual interest in. Her strongest defences are therefore whether there was abuse of process through the US providing false or misleading information to Canada, and whether Canadian police and immigration abused their powers when questioning her and illegally handed information over to the US.

thames

Re: Sanctions are messy, politics is worse.

doublelayer said: "The courts in Canada and the U.S. have been trying to do that. They have already had to deal with questions of intent, and her lawyers have tried to prove that she didn't have ill intent. They have so far failed to do that and are therefore more likely to get the request denied by pointing to procedural problems."

Eh, no there has been no such thing addressed in court yet. The court in Canada is not concerned with her guilt or innocence, just whether the extradition request by the US meets the rather low bar required by the bilateral extradition treaty between the two countries. The US courts will decide on whether Meng has done anything wrong under US law, but they have not been involved yet.

The questions being addressed by the court in Canada include whether the US made their extradition request in bad faith, whether there is a political motivation behind the US request, and whether there was abuse of process on the part of Canadian immigration officials and police, perhaps at the request of their US counterparts.

The particular issue being addressed in Hong Kong is with regards to getting an original copy of the presentation which Meng gave to HSBC, as her defence lawyers have said the US have edited the version they presented to Canada in order to give a misleading impression to the court. If the document was indeed altered by the US then much of the US case falls apart.

As I said the Canadian court are not interested in Meng's actual guilt or innocence. The court are interested though in whether the US actually have enough of a case to justify an extradition. I should point out that going by previous US practice the charges that Meng might face in the US may bear little resemblance to the ones being used to request her extradition.

One of the relevant cases in this respect is with regards to an extradition from Canada to France in which the French legal authorities were later found to have made false representations to the Canadian court in order to gain the extradition of someone under false pretences. The person in question was later found not guilty in France due to lack of evidence, but only after spending years in jail.

One of the issues Canada was dealing with before the pandemic swept everything else off the table was whether we needed to overhaul the extradition laws in Canada to deal with systematic abuse by supposed "allies" who were exploiting weaknesses in Canadian extradition law.

News opinion in Canada suggests that the strongest defences that Meng has with respect to extradition are the abuse of process by Canadian police and immigration officials acting at the behest of their US counterparts.

In addition to this there is a not unlikely possibility of a deal between Canada and the US to get the latter to drop their extradition request in return for the US getting something else they want from Canada (likely either trade, or given the current US government, some support for their environmental agenda). This has been floated multiple times by pundits in Canada as a way out of the current situation where Canada finds itself being used as a pawn in a US-China power struggle.

thames

A bit more detail on that Mountie.

El Reg said: "The RCMP constable alleged to have been party to this has since left Canada, and is thus unable to provide witness testimony."

Last heard, said retired constable (actually a senior staff sergeant, Ben Chang) was in Macau and had hired a lawyer to represent him in an effort to avoid being forced to return to Canada to testify. He currently works as a senior security executive at a casino called Galaxy Macau.

It is considered highly unusual for a former Mountie to refuse to testify in a case.

The problem appears to be that an affidavit filed by Chang about the case was directly contradicted by the notes made at the time by another sergeant. That sergeant then reversed her own testimony, giving a different account from the one in her notes, and agreed with Chang.

The CBSA (immigration) staff who grilled Meng for 3 hours about matters relating to the case claim that they did so because they happened to have read a Wikipedia article on her just before she arrived.

At issue is whether Chang conspired with the US FBI to arrange to question Meng without the usual legal protections, and to illegally pass on confidential information to the US, and then cover it up afterwards. Former colleagues of Chang say that it was not uncommon for the FBI to enlist the help of the RCMP on what they called "political cases" (their words, not mine) such as Meng's.

Here are two good articles from two of the most reputable news sources in Canada.

https://www.theglobeandmail.com/politics/article-ex-mountie-refusing-to-testify-in-huawei-extradition-hearing-is/

https://www.cbc.ca/news/canada/british-columbia/meng-wanzhou-rcmp-extradition-witness-1.5804308

Other news stories have said that the abuse of process arguments relating to the above are the most likely avenue to succeed for Meng's lawyers.

There have been repeated news stories that Ottawa are trying to get Washington to drop the case. A number of retired very senior politicians have also publicly called for the case to be dropped as not being in Canada's national interest. Next to the pandemic it is considered to be Canada's biggest headache.

Happy birthday, Python, you're 30 years old this week: Easy to learn, and the right tool at the right time

thames

Re: Ronacher's Rant

I normally apt install the packages that I use, including libraries. The distro maintainer takes care of ensuring that dependencies are consistent.

This way I don't have to manually keep track of which version of which third party library had an unpatched security bug which the author couldn't be bothered to fix because he fixed it in another release three versions later. Once you start doing your own mix-and-match of different library versions you take a huge security testing and maintenance burden upon yourself which few developers have the resources to do properly.

I do use packages that I've pip installed from PyPI, and I have projects on PyPI myself, but I am very selective about which ones I use, and only use them when I have assured myself that they aren't in the distro repository.

thames

Re: Why do some people not like python's indentation=code block container

Mixing tabs and spaces in an ambiguous way in the same block of code in Python results in a compile error.

Here's an example of part of a Python compiler error message from doing this (the rest of the error message would also give you the line number and print out the line itself).

IndentationError: unindent does not match any outer indentation level

For example, if you have an 'if' statement where you indent one line with tab and the next line with 4 spaces, this will result in a compile error even if the two lines line up visually.
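
You can check this behaviour without even saving a file; in the source string below the body's first line is indented with a tab and the second with spaces. Python 3 reports this as a TabError, which is a subclass of IndentationError:

    source = "if True:\n\tx = 1\n        y = 2\n"
    try:
        compile(source, "<example>", "exec")
    except IndentationError as err:
        print(type(err).__name__, "-", err)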

Consistency is enforced at the block level, so different blocks can have different indentation without any problems so far as the compiler is concerned.

Mixing tabs and spaces for indenting in Python will work, provided you are consistent within the same block of code.

However, mixing tabs and spaces for indenting in any language is not good practice and strongly discouraged by most people. Regardless of whatever method the compiler uses to identify code blocks, people use indentation as their primary means of identifying what lines of code go together with one another and inconsistent indentation can lead to someone misinterpreting what they have read.

thames

Ronacher's Rant

Armin Ronacher's complaints are all centred around his point of view as the developer of a web framework, rather than taking a broader view of how Python is used across many fields.

His complaint about async IO will be related to how it has essentially made most existing web frameworks (including Flask) obsolete in the eyes of many users who now want async versions. There are async frameworks, but established synchronous frameworks with a lot of legacy code are struggling with how to integrate the concepts into their existing code bases that weren't meant to accommodate it.

Python has had async since the 1990s as a library, but various web frameworks created their own version for one reason or another and then monkey patched Python to support it. As an aside, Python Twisted was the inspiration for NodeJS.

Van Rossum decided that the proliferation of third party async libraries was a problem because it limited the degree to which one could mix and match third party libraries. He therefore created a set of new official Python async mechanisms which the third parties were encouraged to use instead of their own custom ones. This has proven so popular that anyone who isn't "async" these days is perceived as being uncool, and so the complaints are coming out from the established non-async frameworks about async.
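
For anyone who hasn't used the official mechanism, the core of it is just async/await plus an event loop. A minimal example, with asyncio.sleep() standing in for real non-blocking I/O:

    import asyncio

    async def fetch(name, delay):
        await asyncio.sleep(delay)  # yields to the event loop while "waiting"
        return f"{name} done"

    async def main():
        # Both coroutines make progress concurrently on a single thread.
        print(await asyncio.gather(fetch("a", 0.2), fetch("b", 0.1)))

    asyncio.run(main())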

As for packaging, most Linux distros ship Python by default and most people that I am familiar with use their Linux distro packaging system by preference when installing packages. There's no problem with Pip from their perspective because they don't use it, and their sys admins don't want Pip used either.

The main people who complain about Pip tend to be web framework developers who want their users to Pip install all the bleeding edge versions of all possible dependencies instead of the supported ones in their distro. If you really insist on doing that though then you should be using containers.

The actual main reason that Pip isn't part of the default Python package, however, is that the Pip developers want you to always use the latest version of Pip.

If I had to pick the one thing that more Python developers would like to see fixed than any other, it would be shipping a more modern default cross-platform GUI package than Tkinter. That's not the sort of thing that concerns web developers such as Ronacher though, so it won't make his list of issues. I see that as a bigger problem than improving packaging (which I have no problems with) or anything else he listed.

I've no problems with Armin Ronacher by the way. I'm just pointing out that he has a particular viewpoint which is shaped by his personal experiences but which may not take into account the many other points of view which have to be taken together to get a reasonable compromise.

Supermicro spy chips, the sequel: It really, really happened, and with bad BIOS and more, insists Bloomberg

thames

Bloomberg were laughed at throughout the technical press in 2018, even if the general media took them at face value. The one person they were willing to cite as a "source" was interviewed by a security podcast and said that he didn't find Bloomberg's claims credible and that they had misrepresented what he had said. They were pretty comprehensively discredited back then, at least among the segment of the audience who would form Supermicro's customer base. There wasn't a lot of reason for Supermicro to sue then.

Since 2018 not a single solid piece of evidence has emerged to support Bloomberg's claims. Back in 2018 Bloomberg relied on second hand information from various unnamed officials in Washington and their one named source said he had been misrepresented and that he didn't believe Bloomberg's story.

This time their story is once again various unnamed officials in Washington, and their one named source just turns out to be someone who heard the same second hand story they did. In other words, it's just 2018 all over again with even less evidence this time.

Where's all these Supermicro boards that have hidden chips in them? If they're out there and been discovered, why aren't we hearing any first hand reports? If the US really had any of these boards they would be holding the biggest IT security press event of the decade with one, rather than feeding stories anonymously to compliant reporters. They're not shy about showing off Russian rootkits to the public, so where's the evidence in this case?

I'll believe it when I actually see some credible evidence in the hands of a neutral party who has a convincing chain of custody for it. Until then, I'll just put it down as Bloomberg being their usual less than credible selves again.

Huawei invokes 140-year-old law at England's High Court in latest bid to thwart CFO's US-Canada extradition

thames

Re: Nothing to hide

Tying it together with the reports of court proceedings in Canada, Meng's lawyers are arguing that the US government presented false, misleading, and incomplete evidence with regards to Meng's dealings with HSBC, but HSBC are required by the US to obey their orders (they are under a "cooperation" order from the US).

The US case revolves entirely around whether HSBC was aware of the full nature of Huawei's relationship with Skycom. If they were, then any US complaints have to be directed to HSBC. If they weren't, then this is what the fraud claim is based on.

The problem for the US case is that one of the important elements of their case is that they claimed that only junior HSBC employees were aware of Huawei's relationship with Skycom. Legal discovery in Vancouver has shown that these supposedly "junior employees" held the position of VP at HSBC.

The US also presented to the court copies of Meng's PowerPoint presentation with critical bits which might absolve her edited out. Some might call this tampering with evidence, but not being a lawyer I won't speculate on that.

What Meng's lawyers appear to be trying to do in this case is get their hands on original copies of HSBC's own documents relating to this before the US applied their creative writing skills to them.

I suppose that if the extradition case gets tossed out in Vancouver then the US may take out their frustration on HSBC, so the bank may have something to lose in this after all.

One of the key witnesses from the Canadian police or customs (I can't recall which) involved in Meng's arrest is currently hiding out in Macau, has hired his own lawyer, and is refusing to testify in the case. This certainly caused a few eyebrows to be raised among people that I know. I suppose it's rather odd that he feels safer in Macau than in Vancouver, but there have been quite a few odd things about all this.

Recent speculation in Canada has been that Ottawa may get Washington to agree to drop charges against Meng personally and to charge Huawei as a corporation instead in return for considerations on other bilateral diplomatic issues (oil, climate change, electricity, or something else that Biden wants from Canada). That way Canada can escape from being a pawn in a fight which has nothing to do with it.

There's a saying that goes along the lines of "when two elephants fight, the grass gets trampled". We'll have to see how this all goes I suppose.

Dev creeped out after he fired up Ubuntu VM on Azure, was immediately approached by Canonical sales rep

thames

Why do you think that Microsoft bought LinkedIn for $26 billion if it wasn't to mine your information and use it to sell products and services to you? The same goes for buying Github for $7.5 billion. Combine this with MS Azure and it gives them a huge amount of data about you, which they then link together to find all the connections which you thought you had kept isolated.

The whole point of social media is to mine your personal information, link together data from multiple sources, and use this to target you for sales and marketing efforts. If that sort of thing doesn't appeal to someone, then they shouldn't have Facebook, LinkedIn, or Twitter accounts.

All grown up: Raspberry Pis running Ubuntu added to IoT patching service KernelCare

thames

Re: Wow

Ubuntu already have live kernel patching. They call it "Livepatch". When you install Ubuntu they ask you if you want to enable it. I never have because I don't see a personal need for it.

I don't think they have it for the Raspberry Pi yet however.

The killing of CentOS Linux: 'The CentOS board doesn't get to decide what Red Hat engineering teams do'

thames

Re: They still don't get the issue with "stream"

I have an open source project which I test on about a dozen different distros, including a couple of BSD variants. I have a fully automated test system which cycles through all the supported platforms. When I make a new release I then give a list of platforms which it was tested on, including which version.

One of those distros is Centos. If Centos Stream is just a rolling release then there's not much point in testing on it, as I can't then say that a specific version is tested and supported.

I have no intention of signing up for a developer account as I have no desire to jump through their hoops and become one of their minions just so I can do free work for them.

I'll keep the current Centos VM in the test system so long as it remains relevant, but once it gets too old compared to the current Red Hat release I'll just drop it. As someone running an open source project, it's really not worth the effort to support a distro whose owners put hurdles in front of it.

Android 10 ported to homegrown multi-core RISC-V system-on-chip by Alibaba biz, source code released

thames

Worth making a note of.

There are two things which will drive wider adoption of RISC-V, both of which are mentioned in the article but which deserve to be emphasised.

One is NVIDIA's acquisition of ARM. If this does go ahead then they will be looking for multiple ways in which to squeeze ARM licensees until their pips squeak.

The other reason is that NVIDIA is an American company, and the American government enthusiastically weaponise the supply chain like no other country in their quest for commercial and diplomatic dominance. China will obviously be one country looking for an escape route from this, but they won't be the only ones. India already have a national RISC-V project running, which has the goal of ensuring national independence from foreign control. With the UK gone from the EU, there is also now nothing stopping the EU from following down the same route for the same reasons. There is already a very large ITAR-free market in Europe to allow European companies escape American political control, and this would be an extension of that.

One of the things which will make an ARM to RISC-V transition difficult is software. A RISC-V port of Android is very significant because it addresses this question head on. With this in place a very broad potential market becomes immediately available.

One of the road blocks to adoption will be Google's dominance of the Android app store and services system. However, in China the Android market is run by local vendors. There is nothing to stop them from offering RISC-V phones and providing a complete Android eco-system to go along with it. Therefore, I suspect that we will see large scale adoption of RISC-V in things such as phones and tablets in China before we see it anywhere else. With a large volume market to bring prices down, expansion into other applications can follow in the same path as ARM.

Overall this is a very interesting development and bears keeping an eye on.

UK competition watchdog calls for views on Nvidia's prospective $40bn acquisition of Brit chip designer Arm

thames

What Nvidia has in mind.

Nvidia's plans for ARM are clear if you think about what they are saying. The key phrase is here:

"We will maintain its open-licensing model and customer neutrality, serving customers in any industry, across the world, and further expand Arm's IP licensing portfolio with Nvidia's world-leading GPU and AI technology," he added.

They quite clearly intend to tie in licences for ARM CPU cores with licenses for Nvidia GPU cores. That is, in order to buy an ARM license you will also need to pay for an Nvidia GPU license, whether you use it or not. The ARM CPU licences can remain reasonable, but the associated GPU license will become increasingly eye-wateringly expensive and with ever increasing restrictions on what you can do with it in order to segment the market and extract the maximum possible revenue from each customer.

Want to build a phone with an ARM CPU? You will have to pay for an Nvidia GPU even if you use a different one. Want to build a server with GPU acceleration for AI? Your only GPU choice will be Nvidia, and you will have to sign a "partnership agreement" with Nvidia to license whatever AI software framework they are promoting that year. Want to build a CPU oriented towards cheap Raspberry Pi like devices? Sorry, but your product doesn't fit in with their market strategy and their salesmen have got other more lucrative things to do than to talk to you about it.

Having ARM owned by Nvidia will also bring it under US export controls, and the US have already tried to stop ARM licensing to Chinese firms. Trump may be gone, but that policy will survive into the future as there are simply too many interests in Washington who want to carry on a trade war with China by all possible means. Over the long term being used as cannon fodder in a war will sink ARM, as nobody outside of the US will be able to trust them any more and companies will drift towards other solutions which are not subject to the same risks. Britain will then lose the flagship of its high tech industry, as just another casualty in a foreign war.

I think that ARM needs to remain under UK control, and the UK should be identifying other companies in a similar position and acting to protect its national IP from situations such as this. Foreign investment should be welcomed, but commercial monopolisation (of the GPU market in this case) or foreign political control (US export laws) should be prohibited.

Where in the world is Jack Ma? Alibaba tycoon not seen since October after slamming Chinese government

thames
Big Brother

There are a lot of financial questions which need answers.

The Register's earlier article, as well as various other reports, have raised real concerns about Ma's Ant Financial subsidiary, which is acting more like an unregulated bank than a tech company.

What Ma is criticising the government for is stepping in and telling him that if he wants to operate a bank then he needs to apply for a banking license and operate under bank regulations, including meeting capital adequacy requirements.

There is genuine concern that if the authorities in China don't do something serious about the fin-tech industry in China, the end result may be a banking system implosion that makes the 2008 financial crisis look like a minor blip. I'm really not looking forward to "global financial crisis 2.0" in 2021 after what we've gone through in 2020.

Most or all of the big tech companies in China are also under investigation by the competition authorities for abuse of their position with respect to carving up the market between them by forcing small merchants to sign exclusive contracts.

I don't know where Jack Ma is at the moment, but from what I've read he has a lot of questions to answer about Ant's finances.

'Long-standing vulns' in 5G protocols open the door for attacks on smartphone users

thames

HTTP/2 ?

The most interesting bit is the focus on how 5G is supposedly vulnerable, when at least part of the problem seems to be 5G's dependence on HTTP/2, which is used for all sorts of things which matter a lot more than cell phones.

Wasn't HTTP/2 supposed to have been designed by the cream of the US tech industry? What's going on here, and why aren't the alarm bells being rung about HTTP/2 itself instead of 5G?

Either something has been left out of the story or there is a much bigger problem lurking in the background which has the potential to affect many sectors of the tech industry.

Uncle Sam throws Huawei CFO a bone in her extradition fight, but deal will require an admission of wrongdoing

thames

Re: The recent court proceedings have been rather interesting.

I wouldn't jump to any conclusions about the outcome at this point. The legal standard for extradition seems to be much lower than for an actual criminal case, so the judge could decide to ignore issues that might cause the case to be thrown out in a criminal trial and sweep any police misdeeds under the carpet.

None the less, the American party requesting the extradition do not have reason to believe that everything is going their way on this.

thames

The recent court proceedings have been rather interesting.

You have to wonder if the timing of this is related to two events. One is that things have not been looking very good for the crown (who are prosecuting the extradition on behalf of the Americans) in court lately. Key police witnesses are contradicting one another, with one police sergeant suddenly retaining his own lawyer and refusing to testify and another reversing the testimony given in her previous affidavit. This is in relation to the abuse of process proceedings (i.e. the RCMP allegedly conspiring with the US FBI to illegally access information).

The other thing that has come up recently is that now that Trump has lost the election, press reports have been suggesting that Ottawa will do a deal with new president Biden when he assumes power to get the US to drop charges against Meng personally, and to charge Huawei as a company instead. It is believed that Biden may be amenable to this request in return for some other favour from Ottawa. Getting some sort of deal on this is a top priority in Ottawa so they can then go on to arranging the release of Spavor and Kovrig in China. This situation is probably second only to the pandemic in priorities in Ottawa.

This recent rumoured offer from US prosecutors may be an effort to salvage something from this case while they still can in case either of the two above situations go against them.

Redis becomes the most popular database on AWS as complex cloud application deployments surge

thames

Redis does a lot more than just act as a key-value store. There are numerous commands for sorting, filtering, and manipulating data. There are message queues, sets, publish/subscribe channels, and even a built-in Lua interpreter for running scripts.

If you are looking into using it for anything, it's worth reading the full documentation to find out all it can do. I'm working on a project which uses it, and I'm not aware of anything else that is similar.
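
For the Python users, a hedged sketch with the redis-py client showing a few of those types. The key names are invented, and it assumes a local server on the default port:

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)
    r.set("page:hits", 0)
    r.incr("page:hits")                  # atomic counter
    r.lpush("jobs", "resize-image-42")   # list used as a message queue
    job = r.rpop("jobs")
    r.sadd("tags", "python", "redis")    # set type
    r.publish("events", "job-finished")  # pub/sub channel
    print(r.get("page:hits"), job, r.smembers("tags"))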

Python swallows Java to become second-most popular programming language... according to this index

thames

Ruby has declined because its main use was with the Rails web framework. As web applications have evolved and changed how they interact with servers and browsers, Rails has fallen out of favour and Ruby has fallen with it.

Python has a very broad range of uses and so is not dependent upon any particular one framework or application.

thames

Re: Meaningless

Python and Java fit into very similar application areas, so they're a good comparison. Both are most commonly used to write applications, although Python probably has a broader range of uses.

The biggest area where Java has widespread use that Python has much less use is in mobile applications, specifically for Android.

With Oracle doing a kamikaze attack on Google and Google now promoting other languages instead of Java, I won't be surprised to see an increasingly rapid decline in the use of Java in future, and its eventual relegation to the legacy enterprise business application niche. Java won't go away, just like Cobol hasn't gone away, but we can expect to see it continue to slide down the charts.

thames

Re: Popular?

Python is extensively used for application programming where speed, and therefore cost, of development is a primary criterion.

C is used extensively to write operating systems, databases, web servers, and the like.

In this sort of comparison C and Python don't really overlap much in use cases. It's a good idea to know both, and to know which one to use in which circumstances, and even how to use the two together. Python is designed to integrate well with C. They are complementary in that respect.

It would take me a while to recall all the programming languages that I have learned and used professionally. They would amount to two dozen at least, with each having some particular advantage in certain applications.

If I had to compress all of those languages down to as few as possible to do the same tasks, then C and Python would cover nearly all of those use cases.

thames

Re: Visual Basic 7.0

CPython compiles to byte code and you will get compiler errors if you type in the wrong syntax. If you want more in-depth static code checking than that, there are popular static analysis tools available for free.

thames

Re: Sin tax

Well you have pretty obviously never used Python because there is nothing preventing you from opening files that have hyphens in their name.

In fact I found your claim to be so bizarre that I had to try it myself just to reassure myself that I hadn't fallen into an alternate universe. I was able to open and read a file with a hyphenated name without any problems.
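
For the record, this is all it took (the file name is arbitrary):

    name = "test-file-with-hyphens.txt"
    with open(name, "w") as f:
        f.write("hyphens are fine\n")
    with open(name) as f:
        print(f.read())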

Your claim lacks credibility.

thames

Raymond Hettinger's explanation was more to the point, and he's one of the core developers. I'll expand a bit more on that.

First we have to admit that there is no one-size-fits-all language. Different languages have their pros and cons in different application domains. That's why multiple languages exist and will continue to exist for the foreseeable future.

While there may be no one-size-fits-all language, Python happens to fit a lot of application domains. It's the "size medium" of computing languages. It's not the fastest, but it's not the slowest either. It's a good balance between speed of development and speed of execution.

The area that it's slow at is things involving tight loops on large arrays of basic data types, particularly integers.

However, as Hettinger states, in most of the sorts of applications where you do that sort of thing extensively, you would tend to be using libraries anyway, such as Numpy or other native libraries. One of the things that Python is really good at is interfacing with existing libraries written in languages such as 'C' used in numerical, scientific, artificial intelligence, and other applications. It's a deliberate design choice that prevents certain optimisations that could be made to the Python run-time itself, but it's considered to be worth it.
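
The standard illustration of that library route, assuming Numpy is installed: the data lives in a native array and the hot loop runs in C, not in the interpreter:

    import numpy as np

    a = np.arange(10_000_000, dtype=np.int64)
    print(a.sum())  # the summation loop executes inside Numpy's C code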

If those existing libraries don't suit your needs, there are options such as Cython or Numba which compile Python to C or C++, and thence to native code. At that point there's no difference between a C program and a Python program, because your Python program has been converted to C (or C++). However your development cycle is then slowed down to the pace of a C program, so it's considered advisable to only do this to the performance bottlenecks in your program, if it's necessary at all.

There are multiple implementations of Python, including ones which have JIT compilers which score well on benchmarks. The fact that people stick with the original CPython system shows that most users don't see significant performance problems which aren't addressed by the tools and libraries well known to Python programmers. There's probably a reason why Python is the language of choice in many high performance computing applications.

And for markets such as web development, the alternatives to Python tend to be JavaScript (NodeJS), Ruby, PHP, or Perl. I would take Python over any of those in an instant: not only can you typically do the same job in fewer lines of code with Python, it's usually faster as well.

The actual problems that Python has are lack of penetration into mobile programming, and not having a better cross-platform GUI system integrated into the standard distribution than Tkinter. The first problem mainly has to do with the difficulty of jumping through Apple's and Google's walled-garden hoops without a big corporate backer. The second is due to lack of interest: people would rather either use the native platform GUI (for which there are library bindings) or use Qt, which is a third party library. Python comes with the Tkinter GUI toolkit, and there just isn't enough interest in replacing it to make the effort worthwhile.

Ubuntu 20.10 goes full Raspberry Pi, from desktop to micro clouds: Full fat desktop on a Pi is usable

thames

Re: Full fat desktop on a Pi is usable

I believe that people had Ubuntu desktop running on the Pi 3 as well, but it wasn't easy for non-technical people. What 20.10 is supposed to do is to make it more mainstream. I intend to give it a try.

thames

Re: But snap... ?

The primary use case for Snap is proprietary game publishers who want to distribute a single binary and not have to update or recompile it when the underlying OS changes significantly. Gaming on Linux is not a huge market, but it does exist, and Snap was designed with heavy input from the publishers as to what they needed to make it economically worthwhile for them.

thames

Re: Why bother?

I have several programs which I test on ARM in both 32 and 64 bit. The 32 bit version runs on Raspbian, and the 64 bit version runs on Ubuntu Server 20.04. The programs do a lot of very CPU intensive number crunching. The tests include a complete set of benchmarks which exercise all the major operations.

One C program runs 33% faster when compiled to 64 bit compared to 32 bit, while the other runs 63% faster in 64 bit compared to 32 bit. I do make extensive use of SIMD operations, however, and 64 bit NEON SIMD is twice the width of 32 bit SIMD.

One Python program runs 25% faster when running on 64 bit Python as compared to 32 bit, and another runs 27% faster in 64 bit.

The hardware is the exact same Raspberry Pi 3 in each case, I just swap the card with the OS image.

Not every use case will be like this. However, if you want the best possible CPU performance, then there are very measurable performance advantages to 64 bit as compared to 32 bit.

What a Hancock-up: Excel spreadsheet blunder blamed after England under-reports 16,000 COVID-19 cases

thames

Re: CSV?

The more important thing is that the data has to come from many different places, and CSV may be the only lingua franca that exists for all of them. It's no good looking for the ideal solution if it would take a major overhaul of the health IT systems to accomplish it. By that time it would be too late and therefore pointless.

There's a CSV module as part of the standard library in Python. There are also modules which will write MS Excel compatible files if those are desired as the output. It should be possible to write a simple translation filter in a relatively short time. It could include whatever validation and error checking is considered desirable.
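A bare-bones sketch of such a filter, using only the standard library csv module. The column name and validation rule here are invented for illustration, and writing real Excel .xlsx output would need a third-party module such as openpyxl:

    import csv

    with open("cases_in.csv", newline="") as src, \
         open("cases_out.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            # Basic validation: drop rows missing a case identifier.
            if row.get("case_id"):
                writer.writerow(row)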

There are still unanswered questions such as why the CSV files had to be converted into the older XLS format rather than the newer one. There's a good chance that there is some other piece of software involved that will import XLS files but not newer formats or anything else useful. That is speculation, but it's a possible answer.

India shows off new home-grown CPU – but at 100MHz, 32-bit and 180nm, it’s a bit of a clunker

thames

Re: From linked Reg article:

India want to develop their high tech industry, and giving their engineers practice in chip design is important to building the sort of talent pool required for a self-sustaining hardware industry.

They have an over-arching development program called "Make in India", encompassing a wide range of sectors, everything including automobiles, mining, electronics, IT (of course), pharmaceuticals, space, electric power, and many others.

The policy for the electronic sector is called the "National Policy on Electronics (NPE) 2019", and follows on the NPE 2012. Here's a direct quote from their website describing the scope "The Electronics System Design & Manufacturing (ESDM) industry includes electronic hardware products and components relating to information technology (IT), office automation, telecom, consumer electronics, aviation, aerospace, defence, solar photovoltaic, nano electronics and medical electronics. The industry also includes design-related activities such as product designing, chip designing, Very Large-Scale Integration (VLSI), board designing and embedded systems."

The main competitor for their new RISC-V chip will of course be ARM, and especially the Taiwanese and South Korean chip fabs which seem to make everything for everyone. However, as I understand it they are less concerned at this point about developing a chip which is competitive in performance, and more concerned about developing skills and a pool of talent which can be applied to other commercial projects, even if the actual chips are made elsewhere.

There is a political and diplomatic aspect to all of this as well. Ultimately India want to establish themselves as a world power and don't want to be dependent upon anyone. The US are at least as much of a threat as China in this respect, perhaps even more so given the US proclivity for weaponising the supply chain through indirect control of foreign companies and financial payments systems. For example, see India's participation in the BRICS payment system (in cooperation with China among others), which has the explicit goal of reducing dependence on US dollar clearing in response to this.

To make a long story short, India have detailed long term ambitions to make themselves a world power and to not be beholden to any other nation from east or west. This is an ambition that spans multiple decades. As part of this however they have picked electronics as a key industry to develop capability in.

What did they do – twist his Arm? Ex-Qualcomm senior veep joins SiFive as CEO, RISC-V PC for devs teased

thames

Re: RISC-V Raspberry Pi

A copy of the Raspberry Pi but with a RISC-V CPU instead of an ARM CPU would be great, but I don't know if off-the-shelf silicon exists for that. What made the Raspberry Pi practical was that a chip already existed which had a lot of stuff integrated into it, including graphics.

A lot of very difficult design work went into the Raspberry Pi to make it cheap, compact, and low powered. A lot of hard decisions about what to leave out and what to leave in had to be made to hit the price target. You can do anything if price is no object; it's when price is one of the design goals that the hard decisions have to be made. The Raspberry Pi was designed by setting a price target and then designing the hardware to fit into that target.

What the RISC-V backers have to do is start with a price target and see what they can fit into it. The Raspberry Pi has set the standard on that, so not many people are going to be willing to spend $1000 on a RISC-V board just to expand the platform support for existing software.

thames

RISC-V Raspberry Pi

What they really need is a RISC-V Raspberry Pi equivalent, not a PC.

Getting people to find room and budget for a PC equivalent is going to be a lot more difficult than getting them to have a small cheap box they can SSH into from their existing PC and use to test software on. A RISC-V PC isn't going to replace an existing x86 (or even ARM) PC because the software support won't be entirely there yet, so people aren't going to use it as their normal desktop PC. It will have to be in addition to their normal PC.

What they need is not so much dedicated full-time RISC-V developers as people running existing Linux software projects who will test their software on RISC-V as another target, along with their other testing. That will get people fixing platform specific bugs and doing platform specific optimisations, which in turn will make RISC-V a more competitive platform overall.

I've got a Raspberry Pi that I use entirely for testing ARM versions of open source software that I write and support. I drive the testing process via bash over SSH and it's entirely automated, along with the series of x86 VMs that run on the PC. I could easily add a RISC-V device to that, but I'm not going to if it costs too much money or takes up too much space. I would do it, however, if it were as cheap and unobtrusive as a Raspberry Pi, just for the sake of the experience.
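My own driver is bash, but roughly the same thing can be sketched in a few lines of Python. The host names and test command below are hypothetical:

    import subprocess

    # Targets to run the test suite on over SSH (hypothetical names).
    TARGETS = ["pi-arm32.local", "pi-arm64.local", "x86-vm1.local"]
    TEST_CMD = "cd ~/project && ./run_benchmarks.sh"

    for host in TARGETS:
        print("===", host, "===")
        result = subprocess.run(["ssh", host, TEST_CMD],
                                capture_output=True, text=True)
        print(result.stdout)
        if result.returncode != 0:
            print("FAILED on", host, ":", result.stderr)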

Too many CPU developers are only thinking of selling expensive development systems to full time hardware developers, and neglect the broader development community that produces the software required to make their hardware viable. This is a good part of why the CPU world has narrowed down to just x86-64 and ARM these days.