The title is too long.
> If the size of the problem is big enough it will still bring the SoA hardware to its knees.
So which will pay off sooner? Streamlining the algorithm? Or just building a bigger hammer?
The history of the FFT is interesting. It burbled along for some 150 years; then, all in a few years, it got "fast". While the technique is still studied, it hasn't gotten a heap faster since the 1965 era. Meanwhile our 'affordable' hardware has gotten a billion times faster.
There is a finite set of "greatly improvable" problems. The FFT was one. Sorting (QuickSort) is another, less dramatic, "overnight" improvement.
We don't even know the lower bound on the complexity of fast Fourier transform algorithms.
Finding a shortcut for a specific problem may suggest improvements in a few related problems, but not for "all" problems. OTOH, faster/wider computer guts speed up ALL problems.
I've been putzing with computation for 60 years. I've lived through the "breakthroughs" and watched the 'inevitable' constant speed-up of computers, and the many headlines of "The End of Moore's Law!!" Back in the 1960s the pundits pointed out that the speed of light limited performance to roughly a nanosecond, because how could you build an ALU/CPU smaller than about a foot across?
(*) BTW, the size of your title, plus the customary "Re: ", brought the SotA Register hardware to its knees.