Sounds cool
Pity they decided to add hallucination to the feature set.
To make Python code run faster, you can now get performance optimization advice from the Scalene Python profiler and its associated chatbot – and mostly its recommendations help. A Python profiler provides statistical data about how Python code runs, such as the number of calls, the time spent executing functions, and so on. …
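For anyone who hasn't used one, here's a minimal sketch of the kind of data a profiler produces, using the standard-library cProfile (Scalene itself is typically invoked from the command line as `scalene your_prog.py`, if its docs are anything to go by):

```python
# Minimal sketch of what a profiler reports: call counts and time per function.
import cProfile
import pstats


def slow_sum(n):
    # Deliberately naive loop so the profiler has something to attribute time to.
    total = 0
    for i in range(n):
        total += i * i
    return total


def main():
    for _ in range(5):
        slow_sum(200_000)


if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    main()
    profiler.disable()
    # Print call counts and cumulative time per function, sorted by time spent.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```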
> "If you are writing code in Python but making heavy use of these native libraries, your code will scream," he said.
So the best way to get fast Python code.....is to have most of the code not written in Python....
Therefore, the fastest possible Python code will be..... when none of it is written in Python!
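Snark aside, that really is the intended workflow: keep the orchestration in Python and push the hot loops into native code. A rough sketch of the gap, assuming NumPy is installed (exact numbers will vary by machine):

```python
# The same reduction done in a pure-Python loop versus delegated to NumPy's C internals.
import time

import numpy as np

N = 10_000_000

# Pure-Python loop: every iteration goes through the interpreter.
start = time.perf_counter()
total = 0.0
for i in range(N):
    total += i * 0.5
print(f"pure Python: {time.perf_counter() - start:.3f}s")

# NumPy: one call, the loop runs in compiled code.
start = time.perf_counter()
total = (np.arange(N) * 0.5).sum()
print(f"NumPy:       {time.perf_counter() - start:.3f}s")
```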
Just use a proper language instead. I really don't understand the attraction of Python.
That's a bit extreme, but I do avoid Python for anything large and interactive. Getting Python to run fast eventually means complicated tricks that ruin the ease of coding it was chosen for in the first place, and it still won't be very fast.
Python is popular for orchestrating GPU operations because the time spent in Python itself is small. There's also no multi-threading of GPU operations - you can only dream of having enough GPU memory for that.
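A hedged sketch of what that orchestration looks like, assuming PyTorch and a CUDA device are available (the same pattern holds for other frameworks): each Python statement just queues a kernel, so interpreter overhead barely shows up on the wall clock.

```python
# Python enqueues GPU work; the matmuls run asynchronously on the device.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

for _ in range(100):
    # On CUDA this returns almost immediately; the kernel launch is queued.
    a = a @ b

if device == "cuda":
    # Only here does Python actually wait for the GPU to finish.
    torch.cuda.synchronize()
```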
For the things I do, the time it takes me to write the code is by far the biggest bottleneck, because the code is something that is typically run once to perform a specific task, and then the task is done. So if run time is 10 minutes, then spending 2 hours to get it down to 0.1s isn’t worth it.
Wall clock. That 0.1s compounds across everything downstream and every user. Not sure how much it matters here, but for things a human is waiting on, it can add up to years of human life. Also, the lifeless, undead vampires on Wall Street don't like slow code either. I know for a fact that if you can speed up certain investor software by 0.1s you'd be paid 8 figures.
1) The python runtime environment may not have enough information available about the program ("introspection") to do that.
2) The runtime required to do the optimization may well be a lot bigger than the actual runtime, so the whole thing ends up much slower.
3) Why spend all that extra effort (runtime) each time you run the program, when doing it once yields better runtime for all runs?
That said, there *may* be situations where what you suggest is useful.
PyPy does just that. It's a JIT-compiled Python implementation (written in RPython) and it can significantly reduce memory use and the cost of loops and function calls. And those are the main areas where Python is slow.
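For a sense of where that helps: an interpreter-bound loop like the sketch below is exactly what a tracing JIT can speed up dramatically (how much depends on the PyPy version and workload), while code that already lives in native libraries sees far less benefit.

```python
# A pure-Python hot loop with nothing native to hide behind: run it under
# CPython and under PyPy and compare the timings.
import time


def step(x, y):
    return (x * y + 3) % 7


def hot_loop(n):
    acc = 0
    for i in range(n):
        acc = step(acc, i)
    return acc


start = time.perf_counter()
hot_loop(20_000_000)
print(f"{time.perf_counter() - start:.2f}s")
```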
But something that does profiling might also pick the bits of code that should either be cached or written to run only once.
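Agreed. When a profile shows the same function being called repeatedly with the same arguments, the fix is often as small as memoization; a minimal sketch, not tied to any particular profiler's suggestions:

```python
# Cache it instead of recomputing it: memoize a function the profile
# shows being hit over and over with the same inputs.
from functools import lru_cache


@lru_cache(maxsize=None)
def expensive_lookup(key):
    # Stand-in for work the profiler flagged (parsing, a query, a big sum).
    return sum(i * i for i in range(1_000_000)) + key


# First call pays the full cost; repeats with the same key are near-free.
expensive_lookup(3)
expensive_lookup(3)
print(expensive_lookup.cache_info())  # shows cache hits and misses
```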
That kind of profile-and-adapt loop is what the Java VM does. It performs broad runtime tuning so it can apply, remove, and adjust optimizations that would never be safe at compile time. A really bloated app might have one or two CPU cores dedicated to that continuously so the other 30+ cores run faster. It's not a good fit for how Python works.