Some points lost in the general ranting..
OK, I have been an audio engineer for 25 years. I know I might listen to music and sound quality a bit 'differently' than the average iPod user, but I think Neil has some points in what he is trying to convey, albeit he both simplifies and exaggerates.
My five cents as follows:
1) Dynamic range compression is NOT part of audio data compression algorithms like AAC, MP3 and the like. I agree, El Reg should consider editing that, as it is misleading in a discussion of compressed audio quality.
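To make the distinction concrete, here is a minimal sketch of what dynamic range compression does - it reduces the level differences within the audio itself, which has nothing to do with the data reduction a codec performs. The function and parameters below are purely illustrative, not any particular product's algorithm.

```python
import numpy as np

def dynamic_range_compress(samples, threshold_db=-20.0, ratio=4.0):
    """Crude sample-by-sample dynamic range compressor: attenuate anything
    above the threshold. It changes loudness/dynamics only - a completely
    different thing from MP3/AAC-style data compression."""
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(samples) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)   # dB above threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio)              # gain reduction
    return samples * (10.0 ** (gain_db / 20.0))

# A loud 440 Hz burst gets squashed; quiet material passes through untouched.
t = np.linspace(0, 1, 44100, endpoint=False)
loud = 0.9 * np.sin(2 * np.pi * 440 * t)
print(np.max(np.abs(loud)), np.max(np.abs(dynamic_range_compress(loud))))
```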
2) There IS a fairly easy way to clearly hear the effects of MP3 (and AAC) encoding on stereo material, because the encoding/compression process does not work on a balanced stereo image but rather on a one-channel-at-a-time basis. A lot of the spatial information lives in the acoustics/reverberation that tails the direct sound; our brains reconstruct it from the sum of the input to both ears, and most of it is quite low in level, so this stereo imaging is often damaged or destroyed by compression algorithms. To hear the effect without any double-blind test (and prove it, yes!) you only need to invert the phase of one channel and mix the result to mono, cancelling out the main part of the audio information. You will be amazed to hear how the compression switches reverb and acoustics on and off very sharply as the signal fluctuates around the threshold set by the algorithm (there's a rough sketch of this test below).
No, this is not snobbery - even though the explanation might sound like that. It actually affects any type of music that contains passages of near silence, like classical music. Just because many people today are used to a constant-level wall of sound, with one song mixed into the next to AVOID silence at any price, it doesn't mean that that's the complete definition of music... there are still millions of people out there who enjoy other styles - like in Asia, where I live.
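If you want to try the test from point 2 yourself, here is a minimal sketch, assuming you have a stereo file that has already been through a lossy codec and been decoded back to WAV, plus the numpy and soundfile packages; the filenames are placeholders.

```python
import numpy as np
import soundfile as sf  # pip install soundfile

# A stereo file that has been MP3/AAC-encoded and decoded back to PCM
# (placeholder filename).
audio, sr = sf.read("decoded_from_mp3.wav")   # shape: (frames, 2)
left, right = audio[:, 0], audio[:, 1]

# Invert the phase of one channel and sum to mono. Everything identical in
# both channels (the direct sound) cancels; what is left is the low-level
# side information: reverb tails, room acoustics, stereo ambience.
residual = 0.5 * (left - right)

sf.write("side_residual.wav", residual, sr)
```

Listen to the residual file: on lossy material you can often hear the reverb and ambience being switched on and off as it crosses the encoder's 'inaudible' threshold, instead of decaying smoothly the way it does in the lossless original.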
3) The artifacts of audio compression do NOT really involve frequency loss, even though lower sample rates do. The biggest artifact of audio compression is in the TIME DOMAIN - it adds 'delays' and 'echoes' to the music. Hence the sizzly cymbals (it's a phase phenomenon) and the tinny hi-hats, etc. The timing between low and high frequencies is affected, which cannot be corrected once it happens. To be fair, digital EQs do the same thing - a low-quality real-time digital EQ can make any lossless playback sound MP3-ish.
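One way to actually hear and measure that time-domain smearing, rather than argue about it, is a null test: align the decoded file against the original and subtract. A rough sketch, assuming you have the original WAV and a decoded copy of its MP3 at the same sample rate (filenames are placeholders):

```python
import numpy as np
import soundfile as sf
from scipy.signal import correlate

orig, sr1 = sf.read("original.wav")
lossy, sr2 = sf.read("decoded_from_mp3.wav")
assert sr1 == sr2, "resample first if the rates differ"

# Mono mixdowns for the alignment step.
orig_m = orig.mean(axis=1) if orig.ndim > 1 else orig
lossy_m = lossy.mean(axis=1) if lossy.ndim > 1 else lossy

# Encoders/decoders add a fixed delay; estimate it by cross-correlating
# the first ~10 seconds of both files.
n = min(len(orig_m), len(lossy_m), sr1 * 10)
lag = int(np.argmax(correlate(lossy_m[:n], orig_m[:n], mode="full"))) - (n - 1)

# Shift, trim to a common length, and subtract.
aligned = lossy_m[lag:] if lag >= 0 else lossy_m
ref = orig_m if lag >= 0 else orig_m[-lag:]
m = min(len(aligned), len(ref))
diff = aligned[:m] - ref[:m]

sf.write("null_residual.wav", diff, sr1)
```

The residual is not just 'missing treble': listen around drum hits and you will hear energy smeared before and after the transients (pre- and post-echo), which is exactly the kind of damage no later processing can undo.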
4) The main problem - not addressed in any of the comments above - is that compressed, and re-compressed, files are NOT EDITABLE. The artifacts of audio compression become very obvious once you want to edit or process an audio file - just like editing a low-quality JPEG in Photoshop. Now, a lot of music today is based on sampled sounds, most of them taken from other recordings, whether a whole loop or just a snare drum. This recycles the 'blurry hi-hats' etc. back into what is supposed to be prime-quality material, and - as I understand Neil's claim - we are slowly losing our references.
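To see how quickly that recycling degrades a sample, you can run a file through a few encode/decode generations, the way a sampled loop gets bounced from release to release. A sketch using ffmpeg (which must be installed); the filenames and the 128 kbps bitrate are just for illustration:

```python
import subprocess

SRC = "sample.wav"        # placeholder: a loop or one-shot you want to reuse
GENERATIONS = 5

current = SRC
for gen in range(1, GENERATIONS + 1):
    mp3 = f"gen{gen}.mp3"
    wav = f"gen{gen}.wav"
    # Encode the current generation to MP3 at 128 kbps...
    subprocess.run(["ffmpeg", "-y", "-loglevel", "error", "-i", current,
                    "-codec:a", "libmp3lame", "-b:a", "128k", mp3], check=True)
    # ...then decode it back to PCM, as a sampler or DAW import would.
    subprocess.run(["ffmpeg", "-y", "-loglevel", "error", "-i", mp3, wav],
                   check=True)
    current = wav

# Compare sample.wav with gen5.wav (or run the null test above on them):
# the hi-hats and reverb tails get noticeably blurrier with each pass.
```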
So, it's the culture of ignoring sound degradation (and I don't mean using it as a creative tool), ignoring dynamics, and adapting all music to fit the maximized, super-compressed style of background music (so you can still hear it without really listening) that I think he is actually reacting to. I am, anyway.
Do easy, streamlined, no-listening-effort-required, stereotypical music streams really HAVE to be the de facto standard for all music?
However, I can't really see why Apple should be more to blame than anyone else.
Could it be because they are supposed to represent ambitions of quality in user experience...?