And yet we still use Python 2...
With the arrival of 2020, the Python Clock has stopped ticking, marking the end of development for the Python 2 programming language. Nevertheless, Python 2 should still be shambling about through April at least, when the final Python 2.7 release (v2.7.18) is slated for delivery. And it's likely to linger for years to come in …
And why not, if you know what you're doing and particularly if it's not "internet-facing". Lots of internal systems will continue to run happily for years with Python 2.
New projects, however, should be done using Python 3 unless you have a good reason not to do so: I know of a couple of libraries that are Python 2 only, largely because the developers have not had the time to port them to Python 3. I suspect that, where the need is great enough, other people will take on the work, either on the original where the source is available, or on a clone where it isn't. This has happened in the past: Pillow replaced PIL after it was effectively abandoned.
but future macOS releases won't ship with any scripting language by default
Apple's justification for this is apparently that such languages were required for "legacy applications". If so, I'm not sure what PHP was being used for. Apple was infamous for releasing doctored versions of these languages, like Python, and then not maintaining them between major OS releases, which is why the PSF maintained its own versions of Python for macOS, and developers used MacPorts or Homebrew to manage them. Anyway, it's basically just another brick in the wall around the garden. I suppose we can expect them to remove support for command line tools in future versions of Xcode. :-/
Red Hat are still shipping 8 with Python, it's just that they have separated the "system Python" version from the "user Python" version.
Red Hat's system tools rely on Python, so they still need to ship it. However, when you type "python" or "python3", you will get whatever version you have designated to be the user version, rather than the version which Red Hat uses for system tools. To get a user version, you need to install it via "yum".
What this means is that you can upgrade the version of Python which your applications use without affecting the version which the system tools use. That tends to matter to Red Hat, as they have very long life cycles.
When the system tools used 2.7 and newer applications used 3.x, that didn't matter, as both 2 and 3 could coexist without any problems because one is addressed as "python" and the other as "python3". The problem comes when Red Hat want to pin the version the system uses at, say, 3.5 but the user wants 3.8 for applications and both versions are known as "python3". This change lets two versions of 3.x coexist without conflicts.
"The problem comes when Red Hat want to pin the version the system uses at say 3.5 but the user wants 3.8 for applications and both versions are known as "python3". This change lets two versions of 3.x coexist without conflicts."
And so it continues ...
You know something is broken when the fix for it is 'give it its own private environment with all its dependencies packaged in so it doesn't break everything else'
the version used by system tools will be supported for the lifetime of the distribution, irrespective of whether it is used by system tools or by user apps, but if the user wants to make `python3` point to a newer version, they can, without fear of breaking anything in the system
and things may break because of deliberate incompatible changes _in python_ like the recent promotion of `async` to a keyword
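A minimal sketch of that kind of breakage, assuming a 3.7+ interpreter (where `async` became a reserved word):

```python
# 'async' became a reserved keyword in Python 3.7, so code that used it
# as an ordinary variable name (as some pre-3.7 libraries did) no longer
# even compiles.
legacy_snippet = "async = True"   # legal in Python 3.5/3.6

try:
    compile(legacy_snippet, "<legacy>", "exec")
    print("compiled OK (pre-3.7 interpreter)")
except SyntaxError:
    print("SyntaxError: 'async' is now a keyword")
```

Perfectly valid 3.6 code, dead on arrival in 3.7, and that's within the same major version.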
When you try to explain that to Linux folk, they act like MS users who don't understand that regular rebooting of computers is not some sort of natural law.
Yet these very folk snipe about windows programs using the same registry as the system does, and programs shoving files into "random" directories.
Presumably what this actually means is that you don't get scripting tools unless you install Xcode (or the Xcode command-line bits). I don't think that's particularly horrifying. Once I would have: when whichever SunOS it was stopped shipping with a C compiler by default I remember being up in arms about it, but, well, that was a long time ago.
...that will have legacy Python code hiding in the bushes. CERN uses Python 2.7 at the heart of their Atlas@home distributed computing project. I suspect that there are many, many other scientific Python 2.7 implementations that will be around for decades to come. It will be like Fortran and COBOL as it goes on living a half-life that would make most radioisotopes envious.
The differences between Python 3 and Python 2 are pretty minimal overall in terms of language design. More like let's get rid of the bad decisions we made 30 years ago so we can move forward.
I think PHP changes much more rapidly with much greater impact from version to version.
I think the reason people stuck with Python 2 so long is because they had zero incentive to look at Python 3, and so they kept on the same path. To them, Python 3 was much like Perl 6, some distant far-off thing that would never come about. So it's good that the Python Software Foundation finally set a firm date to decommission Python 2 and promote Python 3 as the future.
Features from version 3 were backported into the later versions of 2 in order to make 2 more forward compatible with 3. Version 2.7 was a sort of half-way house between 2 and 3. That was intended to allow for easier porting of applications.
The main practical issue that people had was with unicode, as 3's fully integrated unicode support turned up a lot of latent user application bugs which had issues with handling unicode. I ran into some of these with version 2, and they could be triggered by something as simple as a local change in the OS the application was running on.
The number one issue driving Python 3's changes was fixing the unicode problems once and for all by making the language and libraries unicode from end to end. There wasn't any way of doing this while preserving perfect backwards compatibility, but now that the nettle has been grasped Python can put that behind them.
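A small illustration of that end-to-end split, the way it works in Python 3:

```python
# In Python 3 text (str) and raw bytes are distinct types, and every
# crossing between them is an explicit encode/decode.
text = "naïve café"            # str: a sequence of unicode code points
raw = text.encode("utf-8")     # bytes: what actually hits disk or the wire

assert isinstance(raw, bytes)
assert raw.decode("utf-8") == text

# Mixing the two is a hard TypeError instead of a latent bug:
try:
    "abc" + b"def"
except TypeError:
    print("str + bytes raises TypeError")
```

In Python 2 that last line would have silently "worked" until the day it met a non-ASCII byte.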
> fully integrated unicode support turned up a lot of latent user application bugs
Can I have a "oh fuck yeah" upvote button please?
However, the "EVERYTHING'S UNICODE" mindset does make it a lot more difficult to write out bytes to things like image files and embedded devices.
I haven't really found that. For everything you could do with strings, you can do the same things with byte arrays. Just remember to open all your files as binary and prepend b to your strings (s.split(b"\n")) and you won't encounter any unicode problems. There are many very annoying things with handling unicode strings in python, but I view this as the fault of our many text encodings rather than the language itself. It's not the PSF's fault that there are at least five commonly used methods of encoding unicode characters and at least seven non-unicode encoding tables used on typical systems.
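A sketch of that byte-oriented style (the file name here is just illustrative, via a throwaway temp directory):

```python
import os
import tempfile

# Open everything in binary mode and keep separators as bytes; no
# unicode decoding ever takes place.
path = os.path.join(tempfile.mkdtemp(), "frame.bin")
with open(path, "wb") as f:
    f.write(b"header\n\x00\x01\x02\n")

with open(path, "rb") as f:
    lines = f.read().split(b"\n")   # b-prefixed separator keeps it all bytes

assert lines[0] == b"header"
assert lines[1] == b"\x00\x01\x02"
```

Stay on the bytes side of the fence consistently and the encodings never get a chance to bite.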
What you describe as being difficult with handling bytes is the situation in version 2. Strings could be 8 bit ASCII bytes or they could be unicode, depending on what any particular function decided to return, which in turn could depend on your OS locale settings, the phase of the moon, or any number of other factors. A "string" of bytes could change into a string of unicode unexpectedly if you weren't careful.
In version 3, strings were made to be purely unicode all the time, and new "bytes" and "bytearray" types were added which were purely intended as arrays of raw bytes and not to be interpreted as unicode strings.
There was also, however, a full set of byte formatting and manipulation functions added which let you do string-like operations on bytes and bytearrays (searching, replacing, justifying, padding, ASCII case changes, formatting, etc.). So now you can write out bytes to image files and the like and be sure that what you are getting is bytes.
So now with version 3, strings are text and are unicode all the time, while bytes are bytes and not to be confused with text strings.
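For instance, the string-like operations on bytes and bytearray look like this in Python 3 (bytes `%`-formatting arrived in 3.5):

```python
payload = b"GIF89a trailing data"

assert payload.startswith(b"GIF")                        # searching
assert payload.replace(b" ", b"_") == b"GIF89a_trailing_data"
assert payload.upper().startswith(b"GIF89A")             # ASCII case changes
assert b"%04d" % 42 == b"0042"                           # formatting (3.5+)

buf = bytearray(b"abc")        # the mutable variant
buf[0] = ord("x")              # still raw bytes, editable in place
assert bytes(buf) == b"xbc"
```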
This change ironically is what the biggest complaints were about, mainly originating with people who only ever used 8 bit American ASCII and were not happy about having to change their code to better accommodate people who wanted to use funny looking accented characters as part of their alphabet.
In version 2 I was initially very uncomfortable using the "struct" module in the standard library (used to format packed binary data - very useful, have a look at it) because of its use of strings, when it wasn't clear whether a string was an array of bytes or a sequence of unicode characters. With version 3 it uses bytes and bytearrays and there can be no confusion with inadvertent unicode.
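A quick sketch of `struct` under Python 3, where the packed data is unambiguously bytes:

```python
import struct

# Pack a little-endian uint16 followed by a uint32: 2 + 4 = 6 bytes.
record = struct.pack("<HI", 7, 1024)

assert isinstance(record, bytes)   # never a unicode string
assert len(record) == 6
assert struct.unpack("<HI", record) == (7, 1024)
```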
Working with bytes isn't any more difficult in Python 3 than it was with Python 2 except it's now more explicit and you can no longer even try to encode already encoded strings. But allowing us to use the u-prefix in 3.3 did make it a lot easier to make the intent clearer where code was likely to use both unicode and bytes a lot.
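That double-encoding point in miniature: `str` has `.encode()`, `bytes` has `.decode()`, and neither has the other:

```python
data = "héllo".encode("utf-8")   # str -> bytes: fine

# The classic Python 2 mistake of re-encoding already-encoded data
# can't even be expressed: bytes objects have no .encode() method.
try:
    data.encode("utf-8")
except AttributeError:
    print("bytes cannot be encoded again")
```

A whole class of mojibake bugs becomes unwritable rather than merely unlikely.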
More like let's get rid of the bad decisions we made 30 years ago so we can move forward.
If that was what they were doing I'd be happy. But, you know, when the 'fix' for the scoping deficiencies in Python is to add the `nonlocal` statement to the already desperate hack of the `global` statement, well, it's clear that all is just lost. The whole fucking thing should be shovelled into a pit, soaked in petrol and burnt.
And all of this because having to distinguish between a statement which binds (declares) a variable and assignment, like hundreds and hundreds of existing languages do, was too hard. I mean, I can actually live with Python's braindead scoping misdesign: but living with it in a language whose acolytes preach how clever it is: fuck that.
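For readers who haven't hit this: a bare assignment in a nested function creates a new local, so rebinding an outer name takes a `nonlocal` declaration rather than a binding statement, e.g.:

```python
def make_counter():
    count = 0
    def bump():
        # Without this declaration, `count += 1` below would raise
        # UnboundLocalError: assignment makes `count` local to bump().
        nonlocal count
        count += 1
        return count
    return bump

counter = make_counter()
assert counter() == 1
assert counter() == 2
```

Whether that's an elegant fix or a hack bolted onto a hack is exactly the argument above.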
About the only Python project I care enough about (I use it, so I care) was written in Python 2 because the code ran faster. I know this because I coded it to support an easy changeover, so I will probably migrate it when I get the chance.
I am impressed with their running two separate versions for so long.
This post has been deleted by its author
It's hard to see this split between Python 2 and 3 as anything other than a boneheaded decision. What other language does such a thing, other than maybe Java with JVM-specific features so that people are still using JVM 5 and bleedin' 6 in 2020?
As a primarily C++ developer (with Ada inclinations), it hurts my head to think of the disaster it'd be if for example the C++11 standard or Ada 2012 standard had broken compatibility with existing code in some fashion. Entire codebases split into 'old' and 'new', managers suffering aneurysms from having to budget in unexpected 'porting' costs for codebases that are 1M LoC and date back to some Pascal code that ran on a System/370 mainframe back in the 1980s.
Might one take this as a sign that Python isn't quite the enterprise-ready language it's often made out to be? At least the kind of enterprises where 20-30 year lifespans of code with guaranteed maintenance and feature updates are hammered into service contracts that the Devil Himself would get nervous from.
Pissing off a horde of OSS coders probably doesn't have quite the same impact, I guess.
It's industry focus, in my opinion.
The companies selling C and Ada compilers (say, to Aviation or Medical), are not willing to risk the losses they would incur should they make such a stupid decision. Those same companies (Wind River, Adacore, Green Hills, etc) sit on the design boards and guide the development of the languages.
Not sure any company is paying the same (or really anything) for Python, which gives the community carte blanche over its development. It's a Catch-22. No one is going to spend big $$$ on a tool that can't nail down its standards, but then there's no reason to nail down any standards if you're essentially free to do what you want with it.
In other words, the people making changes to C/C++ and Ada have to consider what their customers want out of it. The people making changes to Python are only considering what they want out of it.
Possibly because Python is heavily used by absolute beginners and web developers.
Beginners don't know any different and probably think all languages die and a new one takes its place every few years.
Web developers are a pragmatic bunch. They don't dwell and just jump onto the latest tech that fulfils their contract.
But in general, yes I agree, I would be pretty nervous if C++ changed too much. This recent trend of async / lambda *everything* is a bit too much change for my liking but that is due to new features that some developers over-consume rather than deprecations. I think the worst we have is a few C functions have been deprecated but the preprocessor kind of solves that in a quick and dirty fashion.
While writing this I'm currently skiving off from a project which involves a set of C libraries for Python, involving both Python and C code. It runs on multiple operating systems, with multiple chip architectures, in both 32 and 64 bit, using multiple C compilers.
The Python code runs without change on all platforms, CPU architectures, bit sizes, and C compilers.
The C code on the other hand must be tested, debugged, and tweaked to run on all of them. Word sizes vary, operators have subtle differences in how they behave depending upon CPU architecture, compilers must be placated as one will issue warnings for code that another has no problem with, and obscure documents must be consulted to find the edge cases where the compiler authors simply state that the compiler will generate invalid machine code without warning and must be worked around manually.
I've written software over the decades using Fortran, Pascal, Modula-2, various flavours of Basic, various 4GL languages, assembly language, Java, C, and a host of other languages that I can't be bothered to recall. Given a choice, if I had to write software that I had to be sure would work correctly, I would take Python over any other choice, provided the application was suitable for it (there's no one size fits all language).
I was very sceptical about Python before I first tried it 10 years ago, but I put my preconceptions aside and gave it a good try and was pleasantly surprised. In my experience the people who complain about Python mainly tend to be people who have little or no real world experience with it.
> anything other than a boneheaded decision
Actually, the change was to fix boneheaded design decisions in the original language, and Guido van Rossum basically stated as much. There wasn't a good way to do it w/o breaking backward compatibility, so as another poster said, they "grasped that nettle and moved forward"
It's not only Python that does this. Perl and PHP both have made breaking changes between versions, in fact quite a bit more frequently than Python did. Nobody likes breaking changes, and I'm sure the PSF would have chosen not to if they could get around it. However, by forcing the tricky unicode handling parts to be made clear, they unearthed many potential future bugs, which would have been found later on in less pleasant circumstances had they not broken that. For the same reason, if you designed a C-like language today, you could probably suggest some improvements to it that can't be implemented in C itself to preserve backward compatibility. Not everything about the update was necessarily a good thing, but I think there is sufficient merit that the language shouldn't be counted out.
"As a primarily C++ developer (with Ada inclinations), it hurts my head to think of the disaster it'd be if for example the C++11 standard or Ada 2012 standard had broken compatibility with existing code in some fashion. Entire codebases split into 'old' and 'new', managers suffering aneurysms from having to budget in unexpected 'porting' costs for codebases that are 1M LoC and date back to some Pascal code that ran on a System/370 mainframe back in the 1980s."
All of this ^^, definitely. I have a similar programming background, and apart from the very braindead Fortran 66 vs. 77 transition (or even before), which introduced reserved keywords to lift the ambiguities that existed before, I can't recall any professional, non-freeware language (here, let's emphasise "language", as opposed to compilation systems like gcc) that broke compatibility ...
On the contrary, it seems, these days Python N+1 or Perl N+1 is basically incompatible with N and everyone seems pleased about it ...
This post has been deleted by a moderator
The world needed a Python 3 as much as it needed a Perl 6. Basically breaking backward compatibility as a matter of principle rather than any logical technical reasons. Oh, but backward compatibility isn't ALWAYS broken, you'll just have to run it in both versions and see which crashes less. And whither now the world of still perfectly good but unmaintained Python 2 code? I especially love trying to run that stuff when it was written under the assumption that it could always be run by /usr/bin/python. What a waste.
You think that's bad?
If you really want to talk about breaking cross-compatibility needlessly....
Try dealing with arsehats that write shell scripts that start #!/bin/bash yet don't use bashisms, or even worse, scripts that start #!/bin/sh and do use (generally unnecessary) bashisms.
That's just one of many examples where Linux (Linux/GNU distributions, not the kernel) is a plague on the unix world.
I won't even go into the alsa/pulseaudio shite that came about because someone couldn't fix their broken OSS implementation...
"The world needed a Python 3 as much as it needed a Perl 6."
I think that probably depends on whether you need unicode. I'm fine with ASCII, so I'm sticking with Python 2.7 -- possibly forever. But if Python was going to break the language in order to better handle East Asian characters and such, it was probably a good idea to simultaneously fix all the other language stuff that turned out to be a dubious idea. Things like print being a statement rather than just another function, which means you need some trickery to print from a list comprehension in Python 2.
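For comparison, in Python 3 print is an ordinary function, so it slots into expressions with no trickery:

```python
values = [1, 2, 3]

# print() returns None, but it can appear anywhere an expression can,
# including inside a comprehension (a syntax error with 2's statement).
results = [print(v) for v in values]
assert results == [None, None, None]

printer = print          # and it can be bound to a name like any callable
printer("works like any function")
```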
"When Python 3 was released in late 2008, it was not backward-compatible with Python 2. The plan since then has been to sunset v2 once enough developers made the transition to v3..."
The same has been the case with PHP. It's absolutely crazy to make new versions of a language unable to maintain older code without a huge burden of rewrite. By all means add improvements, but deprecate rather than eliminate their predecessors. Maintaining the language should take second place to maintaining code written in the language, simply because the code is what it's all for.