"True, but there's still overhead even with JIT. Eg: boundary checking, garbage collection."
It depends on whether or not those ease-of-use/reliability features outweigh the performance hit of having them. For some the performance hit will be too great; for others it will be acceptable, and outweighed by the time they would otherwise spend hunting down an intermittent bug that causes array accesses to go haywire.
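For concreteness, here's a minimal Java sketch (the class name is mine) of the trade-off being described: in a bounds-checked runtime a stray index fails loudly at the point of the mistake, instead of silently corrupting memory the way the equivalent C write can.

public class BoundsCheckDemo {
    public static void main(String[] args) {
        int[] samples = new int[10];
        int index = 10; // off-by-one: valid indices are 0..9
        try {
            samples[index] = 42; // JIT-compiled code still performs the bounds check
        } catch (ArrayIndexOutOfBoundsException e) {
            // In C the same write could scribble over unrelated data and
            // surface as an intermittent bug far from the real cause.
            System.out.println("Caught bad access at index " + index);
        }
    }
}

(Modern JITs can also hoist or eliminate many of these checks when the index is provably in range, which is why the overhead is often smaller than feared.)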
""IOW any time spent on "tricky" coding up front is probably in the wrong place"
Sometimes, not always. Obviously if a program is I/O bound then no amount of fancy coding is going to significatnly speed it up, but if its CPU bound then you can work with the compiler and profiler to tightly optimise the relevant part of the code."
I was actually talking about people who alter their coding plan based on what they think will be slow, without profiling their code first.
The experience of developers from Donald Knuth to Steve McConnell is that the bits they thought would be slow turned out, once they actually profiled them, not to be. The bottleneck was never where they expected it to be; it was somewhere else.
What you're talking about is re-coding after you've found the hot spots with a profiler, which is best practice. Implement as simply as possible to begin with (to make sure the results are correct), then optimise the parts where it will make a serious difference: the proverbial 80/20 rule (or, in some of Knuth's work, the 95/5 rule, i.e. most of the run time was swallowed by just 5% of the code).
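As a rough illustration (the harness below is mine, and no substitute for a real profiler), the workflow is: write the obvious version, measure it, and only rewrite what the measurements flag as hot.

import java.util.function.Supplier;

public class MeasureFirst {
    // Crude timing wrapper; a real profiler gives call-level data, this
    // just confirms where the wall-clock time actually goes.
    static <T> T timed(String label, Supplier<T> work) {
        long start = System.nanoTime();
        T result = work.get();
        System.out.println(label + ": " + (System.nanoTime() - start) / 1_000_000 + " ms");
        return result;
    }

    public static void main(String[] args) {
        // Simple, obviously-correct version first; optimise only the
        // pieces the measurements show to be in the hot 5-20%.
        timed("simple version", () -> {
            long sum = 0;
            for (int i = 0; i < 50_000_000; i++) sum += i;
            return sum;
        });
    }
}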
BTW, there are many places for a program to have bottlenecks. Looking at various programs, it seems the biggest speed-up comes from stepping back and deciding whether the basic algorithm is right for the job. Changing that has the biggest influence, but only after you've profiled the basic version.
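A sketch of that effect (the sizes and names are mine): for repeated membership tests, swapping a linear scan of a list for a hashed lookup changes the total work from O(n^2) to O(n), a bigger win than any amount of tweaking the scan loop itself.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AlgorithmChoice {
    public static void main(String[] args) {
        int n = 20_000;
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < n; i++) list.add(i);
        Set<Integer> set = new HashSet<>(list);

        long t0 = System.nanoTime();
        int hits = 0;
        for (int i = 0; i < n; i++) if (list.contains(i)) hits++; // O(n) per test: O(n^2) total
        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) if (set.contains(i)) hits++;  // O(1) per test: O(n) total
        long t2 = System.nanoTime();

        System.out.printf("list scan: %d ms, hash lookup: %d ms (%d hits)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, hits);
    }
}

On a few tens of thousands of elements the hashed version typically finishes in a small fraction of the list scan's time, which is the kind of gap no micro-optimisation of the inner loop will close.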