3.14
And they didn't call it Pi-thon.
The Python team has released version 3.14, with big new features including free-threading support, the ability to use concurrent interpreters, improved debugger support, and an opt-in new interpreter which improves performance by 3 to 5 percent.
I downloaded the Windows version from the Python website on Tuesday when the notice came out, installed it in a Windows VM, and ran my standard tests for several projects that I maintain. So far I've had no problems.
I use the C API, and there were no issues with that either; all tests passed normally.
However, I haven't experimented with the free-threading mode; I've just run it with the defaults.
Python has had threading for a very long time, but the GIL means only one thread executes Python bytecode at a time, so threads never spread CPU work across multiple cores. A number of libraries build on this (e.g. concurrent.futures), and I've used them without any real problems.
Free threading is different: it allows threads to run in parallel on multiple cores. It was always somewhat controversial in Python because it comes with a performance penalty that the majority of users didn't want to pay. Most people using Python for things like web applications want to run multiple processes on multiple machines, or even multiple racks of machines, a use case which threading doesn't even attempt to address. So free threading carries a penalty which everyone pays but offers benefits to only a minority. However, it's here, and it's optional, so the people who wanted it can now demonstrate its usefulness.
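For what it's worth, you can check which flavour you're running from inside Python itself. A minimal sketch, assuming the `Py_GIL_DISABLED` build config var and the 3.13+ `sys._is_gil_enabled()` helper, with a graceful fallback on older versions:

```python
import sys
import sysconfig

def gil_status() -> str:
    """Report whether this interpreter is a free-threaded build,
    and whether the GIL is actually disabled at runtime."""
    # Py_GIL_DISABLED is 1 only in free-threaded ("t") builds;
    # on older/standard builds the config var is absent or 0.
    ft_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
    # sys._is_gil_enabled() exists on 3.13+; earlier versions always have a GIL.
    gil_on = getattr(sys, "_is_gil_enabled", lambda: True)()
    if not ft_build:
        return "standard build (GIL always on)"
    return "free-threaded build, GIL " + ("enabled" if gil_on else "disabled")
```

On a stock build this reports the GIL as always on; on a "t" build it also tells you whether the GIL was forced back on at runtime (e.g. via PYTHON_GIL=1).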
My understanding with some of these big-effort optimizations is that they are proceeding as follows:
- Lay the groundwork in a release (behind a build switch, like free threading, or not). At this point the need-to-haves are stability and not-much-worse performance (i.e. don't -3% everyone, though some slowdown may be acceptable on less-used features with promises of big improvements later; I believe this is where subinterpreters are at).
- In future releases, re-visit the implementation and optimize the heck out of it.
Free threading, or removing the GIL (Global Interpreter Lock), has been on the Python radar for at least 15 years or so. I think that, at some point at least, it was a goal for Unladen Swallow (https://stackoverflow.com/questions/714242/opinions-on-unladen-swallow ; that question was asked in 2009).
Numerous people and projects have taken a stab at it, with the goal of getting their changes merged upstream, but a) the GIL is really a central feature for runtime stability, and b) you're right: whacking everyone on the head with, say, -10% perf has been dead on arrival, even when optimal use cases (which most people's code doesn't resemble) benefit by a possibly much greater amount.
Then another bundle of joy is how third-party C library code interacts with a GIL-less Python. NumPy and Pandas, for example, are fast because they are written in C. Libraries like those also can't be allowed to be durably broken by the changes.
Are you saying even if it is only beneficial for certain calls it slows everything else down?
I.e. it's on or off in total; you can't invoke it only where you want, as you currently can with "import threading", threading.Thread, etc. I had assumed that doesn't penalise the rest of the code. Basically, I don't get the "it may be slower" aspect. I mean, don't use it where it slows things down, use it where it helps.
It's on or off for the duration of the run, based on whether you invoked your script/program with the free-threaded ("t") python binary.
My understanding is that *at present* free-threading is good for stuff that currently benefits from the pre-existing multi-processing support in Python. With free-threading it's faster to start what used to be child processes as they're now just threads, and instead of inter-process communication it's inter-thread. IF your code and libraries aren't broken by free-threading!
Any C threads started by a Python program have always been free to run on multiple cores, as long as they're not touching Python objects. As soon as they needed to talk to Python they needed to grab the GIL and everything became single-core... until now because (if you're brave) they can run free-threaded.
"Just threads." The overhead for atomicity and synchronisation between threads is higher than what the OS gives you with processes for free. (Except on Windows, which has no copy-on-write fork; new processes there are spawned from scratch.)
There's a reason for the GIL and why it is taking ages to replace. Meanwhile, child processes take advantage of parallel hardware with no extra effort. (Distributed systems may require MPI instead, depending.)
Arguably, writing to shared memory is more efficient than pipes, except: to avoid race conditions you need locks anyway, and if you lock the entire shared memory, you're back to the GIL. And if you partition it into smaller atoms, you have more overhead coordinating between those.
Child processes are usually cheaper, faster, safer, and easier than threads. For some things, threads can be faster, but it's always extra effort, it's never "just threads".
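To illustrate the coordination cost the comments above describe: a sketch of the "one big lock" case, with a hypothetical shared counter; every single update pays for the lock, which is exactly the back-to-the-GIL effect.

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n: int) -> None:
    global counter
    for _ in range(n):
        # Every increment pays for lock acquire/release. With one big
        # lock around all shared state, threads serialise on it, which
        # is effectively a GIL of your own making.
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter ends at 4000, but each of those 4000 increments paid lock overhead.
```

Partitioning the state per thread removes the contention, but then you pay instead for the coordination step of merging the partial results, as the parent says.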
It's a build option, so you actually would have a Python standard or a Python free-threaded executable depending on what you build / download.
https://docs.python.org/3/howto/free-threading-python.html
Say your script prints Hello World 1000x: that's a simple sequential program, so you pay a small penalty but get no gain from free threading. I wouldn't generalize, but blog.miguelgrinberg.com/post/python-3-14-is-here-how-fast-is-it claims about a 10% speed penalty on a Fibonacci benchmark (comparing standard 3.14 to free-threaded 3.14). Then again, he states that standard 3.14 beats 3.13 by 27%, so there are plenty of goodies in 3.14, and free-threaded 3.14 is still faster than standard 3.13. Benchmarks... use a truckload of grains of salt here.
Say your script downloads wikipedia articles for all 195+ countries: that's clearly parallelizable, and free threading would benefit it (as would asyncio or multiprocessing). But since network IO is the bottleneck, asyncio or multiprocessing probably work just as well, minus any concerns about free-threading breaking things (really not a risk here).
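A rough sketch of that IO-bound case; the network call here is faked with asyncio.sleep (a real version would use an HTTP client), and the whole thing runs concurrently in one ordinary thread, no free threading required:

```python
import asyncio

async def fetch(country: str) -> str:
    # Stand-in for a real HTTP request; we just sleep
    # to simulate network latency.
    await asyncio.sleep(0.01)
    return f"article for {country}"

async def main(countries):
    # All "downloads" wait on the (fake) network concurrently
    # in a single thread via the event loop.
    return await asyncio.gather(*(fetch(c) for c in countries))

results = asyncio.run(main(["France", "Japan", "Brazil"]))
```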
Say you compute weather cells in Python on a 10x10 grid. That's also parallelizable, but it's CPU bound. With free threading you can take advantage of your multiple CPUs with threads. You could use those CPUs with multiprocessing, but possibly there are benefits to inter-thread vs inter-process communication (I don't use any of these modes much). Asyncio won't help, as your code is not waiting on external resource responses. Regular (non-free-threading) Python threads will not help at all, as this is all Python code bound to one CPU, so each thread will just take turns on it.
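A sketch of that grid, with a made-up per-cell computation standing in for the real weather maths; the same code runs on both builds, but only the free-threaded one can put the threads on different cores:

```python
from concurrent.futures import ThreadPoolExecutor

def cell_value(i: int, j: int) -> float:
    # Stand-in for a CPU-bound per-cell weather computation.
    total = 0.0
    for k in range(10_000):
        total += ((i * 31 + j) % 97 + k) * 1e-6
    return total

# One task per cell of a 10x10 grid. Under the standard build the GIL
# serialises these threads; under a free-threaded build they can run
# in parallel on all cores.
cells = [(i, j) for i in range(10) for j in range(10)]
with ThreadPoolExecutor() as pool:
    grid = list(pool.map(lambda c: cell_value(*c), cells))
```

Swapping ThreadPoolExecutor for ProcessPoolExecutor gives you the multiprocessing variant of the same program, at the cost of pickling the work across process boundaries.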
Again, the weather cell computation example, but while some of the work is in Python, some is also farmed off to Pandas, a very popular C-based extension. Yes, you can use free-threading, because Pandas now officially supports free-threading.
Last, your program takes the weather cell computations and runs part of them in C extensions your team has written. Now you are in the risk zone for the new free-threading Python. It's behind a build switch for just this reason. Is your code impacted or not? Does it somehow make assumptions guaranteed by the GIL (Global Interpreter Lock), which is gone in free threading? Hmmm....
And if your weather cell example consisted of limited configuration in Python that hives the real work off to Pandas or your custom C extension, then free threading doesn't really help, because those C extensions will use your multiple cores just fine in standard Python 3.14 or earlier.
It should also be noted that the acceptance of the free-threading changes by the Python steering council is, so far, conditional. AFAIK, they recently accepted the PEP (that's the Python Enhancement Proposal), but on the express condition that they could yank the plug if it caused any undue problems.
On paper it's a great idea and one that has been long sought after: being bound to one CPU is very limiting on today's silicon. But they are not willing to risk the overall ecosystem until it has fully proved it doesn't have undesirable effects. That's why it's not switched on by default.
Acceptance as the default Python mode is deferred to a future PEP's decision.
https://peps.python.org/pep-0779/
There was a python article in today's newspaper (yes, paper newspaper). In Maine USA. Sweetheart asked "Software or...?"
This baby ball python was found in an Orrington yard.
image: https://i0.wp.com/bdn-data.s3.amazonaws.com/uploads/2025/10/Untitled-design-82.jpg?resize=1024%2C683&ssl=1
"...lucky to be found because the species, ...can only survive temperatures above 60 degrees {F}" ---- It is 54F at the moment.
And the JIT speed issue is just very weird given PyPy's much better perf here. Even guile (Scheme) and hoot see substantial JIT uplift with dynamic typing (thanks in no small part to Andy Wingo's unique magical touch, inspired by V8 it seems) ... if I were MIT, I think I'd reconsider that 6.001 decision to switch to Python (for the robots)!
And then there's MOJO's 400x speedup over (naive) Python ...
IMPORTANT: the following rant applies to all the other scripting languages; Python is singled out 'cos it is the biggest and 'cos TFA about Python illustrates the point nicely.[0]
The article shows the Python launcher being used, to allow the user the choice of which version to use, including being able to select a copy built with free threading enabled.
Which is great, if you happen to be in the right position of (not inconsiderable) knowledge, for example if you are the author of the script. For you, Python is like this.
But heaven help the schmuck who is simply trying to run some program to do something useful: hang on, I've got to run this script using this command when I'm on my PC, but Jim's PC has a different random collection of Python environments, so there it works better to use *this* command; wonder which one is best to put into the documentation for Fred going onsite tomorrow? We're dealing with real-world Python. Who knows which copy has ended up first on PATH? How are your virtualenvs today?
Why is it left up to the end-user to *have* to know, for each and every script, what magic invocation is going to work this time? Sure, give them - give everyone - the ability to make such a selection, but only when they are ready for it! The author knows if this code needs free threading, or uses features from release 3.2 or later, or even if this is old Python 2 code which still works fine[1], so why not provide a way for that knowledge to go into the code?
"The solution", as provided by Python (and other scripting languages) is the hash bang line at the start. Assuming that the environment understands these (as it isn't really Python doing it, is it; we're just piggybacking on the shell) then, yay, you've just hardcoded assumptions about where the runtime is sitting: is it #!/bin/python3 or #!/opt/bin/python - or even #!~/bin/python ? Forget about OS portability! And even locations for one OS are changing over time: Linux (well, some portion of it) is trying to rearrange and merge the morass of directories inherited from the Olden Days of small drives and scattered partitions...
How about putting requirements into the code, instead of a hash bang line, then letting the "python" command (not a special, extra launcher name like "py" but, from the p.o.v. of the user, the real thing) take a gander and do the very best thing it can? Otherwise generate a decent error message explaining that something extra needs to be installed and why. Ok, in reality this means that "python" would be the launcher and would know about proper runtimes in executables with tedious names like "python_v3.2.17_freethreaded-plus-my-test-optimisations-v0.03_build532685_svn6732".
Make this launcher an entirely separate project, so that it can be applied to any scripting language (with appropriate tweaks to e.g. handle different comment notations so it can pick out its metadata and work without necessarily changing the scripting engine at all).
[0] yes, yes I should be off posting about this on the Python forums, on the Lua forums and everywhere else; but I reckon I can get away with being more ranty here whilst I polish up a nice polite way to broach it to TPTB.
[1] or you are looking for regressions and have checked out of version control the first copy of the company product...
Some things helpful to know:
1. shebangs carry over from Unix, where they are used to tell the kernel which interpreter an executable text file should use; this is the reason for their general inclusion in scripts.
2. "#!/usr/bin/env python" - current python command in environment without worrying about its location. "#!/usr/bin/env python3.6" - same for 3.6 if available. python3 is available as the interpreter for a python 3 version, python2 existed (but of course you're not using it any more).
3. Installers can interpolate the python path on install for this very reason, with the installed script now pointing to the appropriate local python with little to no interaction from the user.
4. Launchers in desktops can bypass all of this if necessary by simply invoking the correct interpreter path for the script directly, again something an installer can handle.
To some degree it's messy, but compare the situation where you are building with different dynamic libraries on a system (which is the equivalent of having multiple python environments). It's also much more of a developer-facing problem than a user facing one, as users use installers. Your IDE probably allows selection of the appropriate python interpreter for your project.
> 1. shebangs carry over from Unix...
Absolutely, it was a Neat Hack back in the day: have the program loader look for a magic number then modify the process control structure in response. My "problem" with it is that it is a rather crude mechanism when faced with all the variants of scripting languages.
> 2. "#!/usr/bin/env python" - current python command in environment without worrying about its location.
True, and that passes the buck down to the contents of PATH (so your hash-banged script is in the same situation as your running "python my_script.py"); which irons out one inconsistency. And thankfully env was (eventually) added into Posix, so it is available pretty often (although I still think of, what was it, 2017/2018 when it was added, as "pretty recent"). Although - now, this may just be me, but I really prefer to find *any* way to not keep adding things into the general PATH as things can get pretty messy: every program installed tries to get its bin added to the PATH, so now the earlier programs may be calling a newer executable that just happens to have the same name. All fine if absolutely everyone plays nice, but one bad apple and *poof*.
> 4. Launchers in desktops can bypass all of this if necessary by simply invoking the correct interpreter path for the script directly
That is very much what I'd like to see - as a general mechanism that can be utilised by any scripting language - and without the language interpreter itself having to do any hard work (other than allow the metadata line to exist). If you have installed a Python exe into Windows then that program - well, the version that is provided by python.org at least - has special-case code for handling a subset of shebang lines. Something that is divorced from the actual job of implementing the language. The very fact that there is/has been a distinct Python Launcher is an indication that this is an - interesting - problem and one that is shared by every scripting system.
> 3. Installers can interpolate the python path on ... something an installer can handle.
Assuming that *all* of your installers do actually do that...
Just to repeat my very first line: this is NOT purely a Python-specific issue. In fact, every time anyone comes up with a "well, Python has done xxxx ever since yyyy" or "well, if you always use this Python utility to create your installer it will do it for you", it just indicates that there is more stuff that can be separated out into distinct utilities, totally divorced from any issues around implementing a scripting language interpreter, such that those utilities can be used for Python, Lua, Rexx, Shrimp, Lisp, Smalltalk, Basic, Fly, Asymptote, Box, Fennel, Lilypond, Gri, Perl, PHP (boy, does that need correct version matching!), PIC, the macro language I wrote a few years ago..
And, of course, Intercal.
I think the limiting factor is that you can't really know what's around. I could write a script in any language of my choosing that searches your machine for the dependencies I want, chooses the optimal set from the available versions, and then runs with those; but if you put them in a weird place, or just don't have them, that script is still going to fail. The solution you're describing sounds a lot like that script, but a more professional version that is better at guessing where different versions of things might be. If you haven't updated your Python for a while and only have 3.7, but I need at least 3.10, the launcher won't fix that. My program can, though, by checking sys.version_info at startup before it does anything too new, as long as it's acceptable for the user to find or install the later version once they're told the script won't run.
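That startup check is a one-liner in spirit; a sketch with a hypothetical minimum version (note it only works if the check itself runs before any too-new syntax is parsed):

```python
import sys

MIN_VERSION = (3, 10)  # hypothetical minimum for this script

def check_version() -> bool:
    # Guard early, before doing anything too new, and tell the
    # user exactly what to install if their interpreter is too old.
    if sys.version_info < MIN_VERSION:
        wanted = ".".join(map(str, MIN_VERSION))
        have = sys.version.split()[0]
        print(f"This script needs Python {wanted}+; you are running {have}.")
        return False
    return True
```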
For complex projects, this is mostly solved by either putting it in a package which has dependency requirements (the Linux approach mostly) or just packaging all my dependencies and bringing them along with me (the classic Windows and Mac OS approach and various packaged application formats for Linux that don't require as many different distro-specific packages). Neither approach is really that hard for a developer to do, but it takes some effort and a lot of small scripts don't get very much effort in the areas of documentation or packaging.