The proof of the pudding will be in the eating.
From reading the references, it appears that they hope to get most of the performance increase from an "adaptive specializing interpreter". Experiments with this approach have shown performance increases of 25% to 50%.
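Roughly, the idea is that each bytecode instruction observes the operand types it actually encounters at runtime and rewrites itself into a type-specialized fast path, with a guard that falls back to the generic path if the assumption stops holding. Here is a toy sketch of that mechanism; to be clear, this is not CPython's actual machinery, and the opcode names, the [opcode, counter] cell layout, and the warm-up threshold are all invented for illustration:

```python
# A toy sketch of adaptive specialization (not CPython's real design).

GENERIC_ADD = "ADD"       # generic opcode: full dynamic dispatch
ADD_INT = "ADD_INT"       # hypothetical specialized variant
SPECIALIZE_AFTER = 8      # arbitrary warm-up threshold for this sketch

def run(code, stack):
    """Execute a list of mutable [opcode, counter] cells against a stack."""
    for cell in code:
        op = cell[0]
        b, a = stack.pop(), stack.pop()
        if op == GENERIC_ADD:
            stack.append(a + b)
            # Adaptive part: count int/int executions and rewrite this
            # instruction in place once we are confident in the pattern.
            if type(a) is int and type(b) is int:
                cell[1] += 1
                if cell[1] >= SPECIALIZE_AFTER:
                    cell[0] = ADD_INT
            else:
                cell[1] = 0
        elif op == ADD_INT:
            # Guard: if the type assumption breaks, de-specialize and
            # fall back to the generic path.
            if type(a) is int and type(b) is int:
                stack.append(a + b)  # fast path (a real interpreter
                                     # would inline a machine int add)
            else:
                cell[0], cell[1] = GENERIC_ADD, 0
                stack.append(a + b)
    return stack

code = [[GENERIC_ADD, 0]]
for i in range(10):
    run(code, [i, i + 1])
print(code[0][0])  # "ADD_INT": the instruction has specialized itself
```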
The real question, of course, is what sorts of applications will benefit from this. Most comparative language micro-benchmarks you see on the Internet were written to show off specific optimizations of particular compilers. If your code deviates too much from the micro-benchmark, the performance advantage vanishes.
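To make that concrete, the kind of loop such micro-benchmarks exercise typically looks like the (contrived) sketch below: tight and monomorphic, which is exactly what a specializing interpreter handles best.

```python
import timeit

def hot_loop(n):
    # Monomorphic inner loop: the addition always sees int + int,
    # so a specializing interpreter can stay on its fast path.
    total = 0
    for i in range(n):
        total += i
    return total

print(timeit.timeit(lambda: hot_loop(100_000), number=100))
```

Start `total` at `0.0`, or feed the loop a mix of types, and the int fast path in the toy sketch above no longer applies; real applications are full of exactly such deviations.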
One of the previous attempts at performance optimization in Python was called "Unladen Swallow" and involved adding an LLVM-based JIT compiler to Python. It did very well on the common multi-language micro-benchmarks that many people like to use. However, tests in actual applications found that it made real-world code slower.
The Unladen Swallow team then constructed a set of benchmarks based on large blocks of code drawn from a broad range of common applications. Based on those results, they decided the LLVM-based JIT compiler was not a promising approach after all, and that conventional JIT compilers were not a magic solution.
The only thing that survived from the Unladen Swallow project was its benchmark suite, which was adopted by subsequent projects such as PyPy. PyPy is another Python implementation; it uses a "specializing JIT compiler" and does show real performance increases, but it sacrifices compatibility with C extensions (which is why it has seen limited use).
The way these sorts of projects tend to go, we won't really know whether the ideas being pursued actually work until the work is done and tested on real-world code. Overall, though, it looks very interesting.