Lousy programming practices
The author suggests that optimisation will always make code harder to read. The two optimisations I most commonly make do not:
1. Remove completely useless, irrelevant code. This *really* should not be my most common optimisation. However, most of the time when I get a new program or applet with performance problems, the issue is a significant block of do-nothing code. I find this insane, but true. This optimisation always makes the code easier to read. It is a sign that the original coder either was not attempting to do the simplest thing that could work, or did not refactor after a major code modification.
2. Algorithm optimisations. Sometimes these result in less legible code, but not always. Over the last few years, my most common algorithm optimisation has been switching from numerical arrays to associative arrays, in cases where numerical indexing buys nothing.
For example, a recent applet maintained a sorted list of records that could grow past 10,000 elements. The language provided a sort mechanism for simple arrays but no way to supply a comparator, so the author wrote his own max sort routine and added code to the various update routines to maintain sorted order. The sorted order was never actually used: the search through the list was a simple linear search, and it did not even abort on finding the result - it simply stored the match and kept looking, despite the fact that only one result could ever be found.
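The post doesn't name the language, so here is a rough TypeScript sketch of the pattern described. The record type, the hand-rolled max sort, and the non-aborting linear search are all illustrative stand-ins, not the original code:

```typescript
// Illustrative record type; the real field names are unknown.
interface Rec {
  id: string;
  value: number;
}

// Hand-rolled "max sort" (a selection sort): repeatedly move the largest
// remaining element to the end of the unsorted region. Always O(n^2).
function maxSort(list: Rec[]): void {
  for (let end = list.length - 1; end > 0; end--) {
    let maxIdx = 0;
    for (let i = 1; i <= end; i++) {
      if (list[i].id > list[maxIdx].id) maxIdx = i;
    }
    [list[maxIdx], list[end]] = [list[end], list[maxIdx]];
  }
}

// The search described above: linear, and it keeps scanning after a hit,
// so the carefully maintained sorted order buys nothing at all.
function findById(list: Rec[], id: string): Rec | undefined {
  let found: Rec | undefined;
  for (const rec of list) {
    if (rec.id === id) found = rec; // stores the match, never breaks
  }
  return found;
}
```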
I converted it to use the language's built-in associative arrays. That eliminated the 30-line sort routine, the half dozen or so lines at every insertion and deletion that maintained sorted order, and the 10-line search routine - along with all of the runtime overhead of that code.
The kicker: the language in question doesn't actually support numerical arrays; it fakes them with associative arrays. So there was no increased cost per access, even with a list size of 1. The cost actually went down, because rather than dereferencing each array element's record structure to get its key and compare it against the one I wanted, I just looked up the key I wanted directly.
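For comparison, here is the converted shape, again as an illustrative TypeScript sketch, with a Map standing in for the language's associative array:

```typescript
// Same illustrative record type as above.
interface Rec {
  id: string;
  value: number;
}

// Associative array keyed by id: no sort routine, no order-maintenance
// code at each insertion or deletion, and no search loop.
const records = new Map<string, Rec>();

function insert(rec: Rec): void {
  records.set(rec.id, rec); // replaces insert-plus-resort
}

function remove(id: string): void {
  records.delete(id); // replaces scan-splice-and-resort
}

function findById(id: string): Rec | undefined {
  return records.get(id); // replaces the full linear scan
}
```

The lookup goes straight to the key, which is exactly the cost reduction described above.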
I've seen many programmers waste a lot of time on optimisations, and generally they were going about it incorrectly anyway. For example, one programmer spent hours optimising the options processing of a program, timing entire program executions with a wall clock after each code change. For reference, the options processing took less than a tenth of a second at the start of each execution, while the rest of the run took at least 15 minutes - so even eliminating options processing entirely would have saved at most 0.1 s out of 900 s, roughly 0.01%. And the program would skip the rest of the processing anyway if the last option was --help.
Note that the last person I saw doing that was inspired by a prior optimisation I had made in the same program: removing a startup routine which sorted an array of data that was no longer referenced anywhere else in the program - using an algorithm whose worst-case performance occurred on already sorted data.
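The original doesn't say which algorithm that was, but the textbook example of a sort whose worst case is already-sorted input is a naive quicksort that pivots on the first element. A minimal sketch:

```typescript
// Naive quicksort pivoting on the first element. On already sorted input,
// every partition is maximally unbalanced, so it degrades from O(n log n)
// to O(n^2) - the pathology described above. This is only the classic
// illustration; the routine in the original program is not named.
function naiveQuicksort(a: number[]): number[] {
  if (a.length <= 1) return a;
  const [pivot, ...rest] = a;
  const smaller = rest.filter(x => x < pivot);
  const larger = rest.filter(x => x >= pivot);
  return [...naiveQuicksort(smaller), pivot, ...naiveQuicksort(larger)];
}
```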