"Everything between sample points is lost"
Mmmm. Messrs Nyquist and Shannon might have a bit to say about this.
Can people hear the difference between lossy MP3 digital music files and lossless ones? Opinions differ strongly, with much obfuscation around audio cables, mastering and hi-fi componentry muddying the waters. This article was prompted by commentard critiques of Sonos streaming Wi-Fi speaker/player reviews and audiophiles …
Mmmm. Messrs Nyquist and Shannon might have a bit to say about this.
What, you mean to agree with him? There is no error there: it stands to reason that if you are ignoring the input at any given point what happens during that time cannot be passed through to the output.
I think he means to make the distinction between the time domain and the frequency domain. Assuming perfect instantaneous sampling then everything in between samples in the time domain is lost. But, frequency wise, there's no new information between samples to miss.
I guess it depends on what you define the totality of the information to be. If you want to record all frequencies up to 20kHz, and you have regular samples at 40kHz, nothing is lost. If your low-pass filter is insufficient then aliasing may even add things that weren't there originally...
I think he means to make the distinction between the time domain and the frequency domain. Assuming perfect instantaneous sampling then everything in between samples in the time domain is lost. But, frequency wise, there's no new information between samples to miss.
But that's the whole point - you have defined a frequency domain. A real life audio signal does not keep to neat boundaries so something like a clash of cymbals for instance will reach well into ultrasound territory. If you are sampling at 44.1kHz that is going to be lost. The fact you are defining a region of interest - presumably some "human hearing" range - is itself an acknowledgement of that. The data is lost regardless of whether you were interested in it or not.
The key point about Nyquist's theorem is it starts with the assumption that the signal you are interested in is strictly limited in bandwidth. If that initial assumption is true, for example that you only want/need 20Hz to 20kHz, then by sampling above twice the highest frequency (say at 40.0001kHz) you are NOT losing any information by sampling.
What is important is that 20kHz is an arbitrary value (though a realistic limit for most younger humans; us old buggers are lucky to get 15kHz) and to avoid the very unpleasant business of aliasing the input MUST be strictly limited to that value.
Since that near brick-wall filter is highly impractical for any analogue filter, what is normally done is to sample higher than that: either a little bit more on sample rate (like 44.1kHz) with good analogue filters, or a much, much higher sample rate that pushes the band-limiting problem into the digital domain, where it is practical to implement good filters (with some time delay, but for recording that is not a problem), and then to re-sample at a chosen lower rate.
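The "no information lost" claim is easy to check numerically. Below is a minimal numpy sketch (the 40kHz rate, 15kHz tone and 400-sample window are arbitrary illustration choices, not anything from the thread): sample a band-limited tone, then recover the waveform *between* the samples by zero-padding the spectrum, which for a periodic band-limited signal is equivalent to ideal sinc interpolation.

```python
import numpy as np

fs = 40_000     # sample rate (Hz), above twice the tone below
n = 400         # 10 ms of samples
f0 = 15_000     # band-limited test tone; fits an integer number of cycles

t = np.arange(n) / fs
samples = np.sin(2 * np.pi * f0 * t)

# Evaluate the waveform on a 16x denser grid by zero-padding the spectrum
# (ideal band-limited interpolation for a periodic signal).
spec = np.fft.rfft(samples)
dense = np.fft.irfft(spec, n * 16) * 16      # x16 undoes irfft's 1/length scaling
t_dense = np.arange(n * 16) / (fs * 16)

# Compare against the true continuous tone at the in-between instants.
err = np.max(np.abs(dense - np.sin(2 * np.pi * f0 * t_dense)))
print(err)      # down at floating-point noise: nothing between samples was lost
```

The error is at floating-point level: every point between the original samples is recovered, because a band-limited signal cannot wiggle between samples in any way the samples don't already determine.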
"ultra"-sound meaning beyond sound, I presume?
You can't hear it; that's why we only pass up to about 20 kHz into the recording chain. If it's not present in the input, it's not "lost" from the output. Nyquist's theorem is the law. The reproduction of a low-pass signal is essentially perfect - nothing is "lost".
Of course, no real-world implementation is perfect, but there's no big mathematical or philosophical discussion hiding here.
that's a bit of a muddle - if the signal has no energy above some frequency and you sample at least twice that fast - and the output filter is good enough - the output waveform is (essentially) identical to the input. Nothing is "lost".
"FOR FUCK'S SAKE HAS NOBODY EVEN BOTHERED TO READ THE ARTICLE?
Read the quote again: "Everything between sample points is lost". If there is a signal faster than that it is gone. That is not conditional on anything. You can argue about whether it is relevant or not but it is gone never to be recovered. That is precisely what Shannon said."
Strident, profane, but wrong. If you understood sampling theory you'd know that frequencies above half the sampling rate are not "gone"; their energy is aliased back down into the baseband (the spectrum is effectively "folded" back on itself). It's actually more complicated than that, but that's close enough for this discussion. The article's phrasing is unfortunate, at minimum, but I lean toward "wrong".
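For anyone who wants to see that folding rather than take it on faith, here's a small numpy sketch (the 30kHz tone is an arbitrary example): sampled at 44.1kHz, it produces exactly the same sample values as a phase-inverted 14.1kHz tone, i.e. its energy lands at fs − f rather than vanishing.

```python
import numpy as np

fs = 44_100.0
t = np.arange(1_000) / fs

f_in = 30_000.0          # above Nyquist (22.05 kHz)
f_alias = fs - f_in      # 14.1 kHz: where the energy folds to

above = np.sin(2 * np.pi * f_in * t)
folded = -np.sin(2 * np.pi * f_alias * t)    # the first fold flips the sign

gap = np.max(np.abs(above - folded))
print(gap)    # ~0: the sampler literally cannot tell the two apart
```

The two sets of samples are identical to floating-point precision, which is why an anti-aliasing filter before the ADC is mandatory: once the fold has happened, the 14.1kHz ghost is indistinguishable from real baseband content.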
There is no error there: it stands to reason that if you are ignoring the input at any given point what happens during that time cannot be passed through to the output.
This is true if and only if there is no deterministic relationship between the samples and the unsampled data (i.e., there exists no function f where f(s) = u).
This sentence is fundamentally wrong. Nothing between the sample points is lost. The continuous analogue signal can be perfectly recreated from the samples.
Please please watch this excellent video made by someone more knowledgeable than anyone on this forum: (watch between 4:00 and 6:00 if you don't have time to watch the whole thing)
(For the pedants, yes, this assumes that the signal being sampled does not contain frequencies above 22.05kHz. Obviously this filtering is always done to the signal prior to sampling)
Mmmm. Messrs Nyquist and Shannon might have a bit to say about this.
I'm sure you feel such a big boy quoting those names. Pity that it doesn't automatically make you right or knowledgeable, indeed it simply shows that you missed their central tenet. Encode a 100kHz signal at 44.1kHz and then regenerate the wave from the sampled data. That 100kHz signal is not present in the output. If it hasn't been lost then where has it gone? That is the whole point of Shannon-Nyquist - the sampling frequency determines the maximum frequency that can be sampled.
From there you get to: events that occur faster than the sampling frequency can't be captured, which remains true however those samples are captured. I see another poster is bringing in whether the samples are instantaneous readouts or integrations, which is an utter irrelevance - the principle holds regardless of the sampling methodology.
The article states that events that happen faster than the sampling frequency can't be represented. That is true. So again, precisely what is wrong with that quoted text?
Nope. Sampling includes filtering to get rid of the aliased copies. If it didn't, it would sound really, really horrible.
As other posters have pointed out, the filtering is never perfect, so sampling is never Nyquist-perfect either. Sampling at higher frequencies and higher bit depths should have fewer imperfections, although if the hardware is a bit pants anyway, high-rate sampling won't make a huge difference. (And if it's really pants it can make the sound worse.)
But the killer problem for digital is clock jitter. If the sampling clock isn't rock solid to nanosecond precision, you can forget Nyquist, because Nyquist assumes perfect sample timing. A lot of the smeary-splashy-nasty sound digital used to be famous for was caused by cheap jittery clock sources.
If your DAC can accept an external clock, hooking up a studio-grade clock source will do the sound many favours. It will also make the differences between FLAC and MP3 more obvious.
IME I can hear the difference very clearly, and the MP3 sound is seriously fucking annoying, even in a car. But my gf, who is a classical musician and can pick out the notes in chords by ear, is fine with MP3s. She hears music as pitch lines and very fine timing details, and most timbres as a placeholder. MP3s include all the detail she needs.
In fact everyone hears differently anyway, because everyone's ears are a slightly different shape, so we all have different acoustic filters stuck to the sides of our heads. So it's maybe not a surprise some people strongly prefer FLAC and others don't care.
Last point - CDs aren't really lossless. Because of dirt, scratches, laser servo issues and other inaccuracies, most players drop back to Level 1 error correction at least some of the time, so there's always some quality loss.
A good CD rip will be bit-perfect with multiple read passes made to minimise errors, so FLAC will always sound better. (I was amazed by the difference - so amazed I spent a few months ripping and selling off all my CDs.)
If you just sample a low-pass filtered signal, even with a bit of jitter, and play it back, you won't add appreciable artifacts. Yes, there will be some, but the artifacts inherent in MP3 and related algorithms are orders of magnitude higher.
MP3 divides the signal into frames and applies a Modified Discrete Cosine Transform (MDCT), which takes the signal from the time domain to the frequency domain. Then it compresses the MDCT coefficients by quantizing them, guided by a psycho-acoustic model.
(Psycho-acoustic model means: "We've algorithmically determined that you can't hear this thing we're throwing away." It's based on many studies of the masking effects inherent in human hearing, such as not being able to hear certain sounds after a loud plosive sound, etc.)
Quantizing in the frequency domain adds non-causal artifacts to the signal. What do I mean by "non-causal"? You can get what some call a _pre-echo_ before sharp time-domain discontinuities in the input, such as percussive sounds. Pre-echo is what makes percussion sound "muddy" or "blurred". You start to hear a snare hit or cymbal before it's been hit.
That's why I call it non-causal: Analog filtering and properly designed digital filtering don't change the leading edge of a discontinuity; rather, there's an impulse response that appears after the discontinuity. But, with frequency domain quantization, the artifacts get spread to both sides.
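Here's a toy numpy illustration of that two-sided spreading (a plain orthonormal DCT-II stands in for the MDCT, and the 0.05 quantization step is an arbitrary choice; real encoders window overlapping frames and quantize per psycho-acoustic band): quantize the coefficients of a frame whose only content is a late "click", and reconstruction error appears before the click as well as after it.

```python
import numpy as np

n = 256
k = np.arange(n)
# Orthonormal DCT-II matrix, built by hand to stay numpy-only.
basis = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n) * np.sqrt(2 / n)
basis[:, 0] /= np.sqrt(2)

signal = np.zeros(n)
signal[192] = 1.0                  # a sharp "percussive" click late in the frame

coeffs = signal @ basis                        # to the frequency domain
step = 0.05
quantized = np.round(coeffs / step) * step     # crude coefficient quantization
recon = quantized @ basis.T                    # back to the time domain

error = recon - signal
pre = np.max(np.abs(error[:192]))     # noise BEFORE the click: the pre-echo
post = np.max(np.abs(error[193:]))    # noise after it
print(pre, post)                      # both non-zero
```

A causal time-domain filter could only ever smear the click forwards; quantizing in the frequency domain smears it both ways, which is exactly the pre-echo complaint.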
You've likely already experienced this elsewhere: highly compressed JPEGs and MPEG video! Take a look at what JPEG and MPEG do to areas of sharp contrast, such as text. You see "sparkles", "ringing" or "mosquito noise" to all sides. Both are based around a similar frequency domain transform, the DCT, and both perform similar quantization, only in two dimensions (horizontal and vertical) rather than one (time).
But the artifacts arise from the same place, mathematically.
If you read the design documents on Ogg Vorbis, they're very sensitive to the issue of pre-echo.
There are other artifacts I can hear in MP3 (especially heavily-compressed MP3) that others don't notice. There are burbles, the occasional tone that sounds like Morse code, and so on. These too are artifacts of popping to the frequency domain and quantizing frequencies to varying degrees.
As for the idea that "most people hear differently": Because I've worked with our digital video folks, I'm quite sensitive to video artifacts, including DCT artifacts, but also spatial domain quantization (resulting in "contouring") and so forth. My wife and friends never really noticed many of these until I started pointing them out. Now they hate me for "ruining" them. ;-)
All that said: I can definitely hear artifacts in 128kbps CBR recordings, fewer in 192kbps VBR recordings, and rarely or never in 256kbps or 320kbps. At 320kbps, you're only compressing CD audio (1,411kbps) about 4.4:1, so you're leaving most of the signal intact.
Likewise, I rarely notice JPEG artifacts on something compressed with 90% or higher quality, but then the compression rate also drops significantly compared to lower quality levels. At that point, if it has a lot of text, you may be better off with PNG anyway.
> But if you can't tell the difference already, what "breakthrough" could possibly improve the experience?
Well, there are a lot of things you might want to do spatially, or spectrally, with the sound to spread it across different speakers, etc.
The problem is, once you encode audio with a perceptual codec like MP3, the resulting audio makes sense to the human ear in a very specific set of circumstances: that it's played in stereo through a standard listening system.
Outside of this, it breaks down.
The reason is straightforward: MP3 primarily removes parts of Sound A because Sound B has played before, or simultaneously, making some parts of Sound A inaudible ("masked").
Now, if you go changing the sound - dividing it between speakers, extracting elements, etc - Sound B is played back differently and hits the ear differently, meaning different parts of Sound A get masked, and the parts the MP3 cut out of Sound A become obvious (it sounds akin to an MP3 with a 64kbps bitrate, or a crap YouTube vid, with all the swirling noise and distortion).
Similarly, if you need to digitally analyse the sound for some reason, the loss of data is very significant, as it was encoded to sound good for a person, but the signal doesn't make sense in its own right - the removal of all those chunks of audio looks like weird noise added to the signal.
The ability to tell the difference depends on 3 things:
1) The original quality of the recording.
2) how good your system and ears are.
3) What sort of MP3 compression is in use.
Number (3) is critical: if you are using 128kbit fixed-rate coding then I am pretty confident you will tell the difference; if you are using 320kbit variable-rate I doubt most could.
Also probably 6" drivers in MDF, chipboard, or possibly plywood baffles / boxes.
Not 4" drivers in a plastic case, which can often be poorer than £2 earbuds. Over-the-ear 'phones are actually harder to do properly than in-ear earphones.
My experience is with "home cinema" situations. Most of the cheap rigs you can buy for that have small speakers for the directional channels—the surround sound—and a subwoofer. But you can hear when all the bass is coming from the front, rather than from the same direction as the high-frequency element.
My father's hearing was pretty bad, but having background noises coming from different directions than the character speech made a big difference. Even just putting the normal TV sound through a Dolby Analogue Surround decoder can help (It makes me wonder what some of the "bad" TV sound really is.)
Anyway, most stereo recordings position sound sources merely by relative volume. They don't bother with phase. It's a trick. You hardly hear anything these days of binaural stereo, which did try to capture the phase differences. Modern digital surround sound does stand a chance of using that data, but speaker design tends to ignore frequency-related phase shifts.
Your show-off hi-fi audiophile with the big speakers isn't exactly wasting money, not until he gets into exotic speaker cables, but he may be missing out on a lot by sticking with such an ancient sound recording standard. Stereo sound is still faking it. So might a 5.1 recording, but it doesn't have to throw away so much data.
In short, if your source is 96kHz or higher, try something like FLAC. If it is CDDA, you're wasting space with FLAC. That opinion is not from someone who is a technical expert on sound, but from someone who has remastered hundreds of songs.
FLAC is severe overkill when your input is CDDA. There is a much more significant visual difference between BetaMax and VHS than the audible difference between FLAC and MP3@256 if your source is CDDA (mp3@320 and you're sort of wasting space again). So, how much visual difference is between Beta and VHS again?
In 2003 I started remastering individual songs that had never been remastered using a plethora of DSP software (no big boy hardware :-( ). If I couldn't find a studio master (which were far easier to find then), I would choose CDDA. At some point, I came across a master of Moon Safari by Air (not for remastering, but because I liked it). The master of course didn't sound the same as the CD I had, so I looked for my CD for comparisons, but I couldn't find it. In desperation to find the differences by comparing the master to my CD, I substituted .mp3 files for my CD as the source.
A week passed and the differences became apparent; however, I still wasn't using CDDA, so I acquired the CD version to ultimately compare CDDA to .mp3. What I found between my 256.mp3's and CDDA was eye-opening. Not only was 99.5% of the audible information there, but more importantly, the information that was NOT in the CDDA but was in the master was likewise absent from the 256.mp3's - and the .mp3's were sourced from CDDA! If CDDA didn't have it, .mp3 didn't have it either.
You really can't remaster from CDDA, but whatever you can make better out of a CDDA, you could do just the same from decompressing a 256.mp3. I lived it.
Supposedly, Nilesh Patel. However, I'm positive that he wasn't the one that put it "out there" back then. It was probably some intern in broadcasting at the label or something similar to that in this case, or in all cases truthfully. I haven't looked for any masters in years, but last I looked about 6 years ago, I couldn't find 1 person that had any, new or old.
BTW, here's this if you care http://www.dummymag.com/news/legendary-mastering-engineer-nilesh-patel-dies
:-( Yeah, legend indeed. I'd have loved to have had the chance to send a mix his way.
Yeah, the reason I ask is that it seems bizarre to me that there would be any audible difference between the masters the mastering engineer made, and what was on CD, given that that would be the delivery format from the mastering engineer to the label in most cases. What leaves the mastering studio ends up on the CD with bit-perfect accuracy.
So if you were noticing a difference, I wonder were you getting the *mix masters*, from the mix engineer, before it got sent to mastering?
@142 "...I wonder were you getting the *mix masters*"
Actually, those are what I wanted, but what I received was 1 file of a 2-channel mix almost 4GB in size. Sometimes these masters came in multiple channels, most times not (30/70). Being 4GB in size, I knew immediately it wasn't what I wanted, but how could I refuse :-).
At that time what I wanted was a proper 5.1 mix, I still would. It's been too long to remember the exact numbers of anything. The only things I remember are that it was 2 channels, it had a few tracks that were not on any release (at that time) and it had a much higher sampling rate. I think I read some years back that a new vinyl cut had been released, apparently the finest release of the album, but I never investigated.
I still wish that the RIAA would have made an initiative to forcefully archive higher quality versions of ALL albums that were digitally distributable in at least DVD-A spec (5.1 24b/96kHz). DVD-A spec has been around nearly 20 years and is now considered archaic. However, it's still far better than CDDA in all aspects (doubly better). It makes me wonder why people consider CDDA lossless at all when CDDA itself is extremely lossy compared to so many alternatives (new and archaic).
For me, comparing CDDA to MP3 is exactly the same as choosing which flash drive to store text documents on: 16GB or 64GB? I'll take the one that gives more space.
MP3 is a pretty smart piece of sound engineering, dropping sounds that most of us can't perceive, but it would be foolish to think there can't be better.
If I had something on a lossless file on my desktop machine, and something came along that would measure my hearing and compress in a way that took my personal limits into account, the result might be better for my ears, and compress better than MP3.
Fanciful? Well, the big problem is the measurement of the response of my ears, but they only get up to about 12kHz these days, and compression suited to J. Random Teenager could include a lot I cannot hear, so maybe a personal compression standard would work.
But if there's something like that, we're going to need a lossless source for the compression.
You might have a look at
where you can hear what MP3 (the MPEG1 Layer III audio codec, as defined in ISO 11172-3) discards by opening one of the links in the table with dBA values. It is pretty horrible.
My impression is that MP3 leaves the sound intact, but compromises the emotional impact. Maybe a boffin with access to an fMRI scanner can do some research on this.
But if you can't tell the difference already, what "breakthrough" could possibly improve the experience?
And what if you don't care? I've been buying music (defined broadly) for three decades, and I consistently find myself unable to give a damn about fidelity. There are songs I enjoy listening to, and I enjoy them just as much from a bargain-basement MP3 player and earbuds as I do from a CD and fancy audio components (when I hear them played on someone else's system, since I don't own any player that cost more than $30).
Yes, I understand that many people do care; but some of the codec warriors don't seem to understand that not everyone shares their passion.
That said, when our phones all have terabytes of storage, we'll probably all use lossless.
I'll use whatever format it comes in when I buy it.
The reason I use FLAC? It's my archive copy. CDs get scratched, lost and broken. They also degrade over time, especially in hot climates. My FLAC copies are backed up in multiple locations, and versus the original CDs they are searchable, streamable, and shareable en masse. I can also re-encode them quickly to AAC for use on portable devices, and when AAC is surpassed by some other format in order to cram yet more music onto small devices, I will re-encode to that.
"I'd love to get more listening pleasure from the music and hear it as the artist intended."
Go to live performances in the flesh. Unless they are miming, you get a different performance every time. You also get the rapport building between the performers and an engaged audience - and that's something you can't put in a bottle.
Afterwards when you play a recording your brain will override what you are actually hearing - by "replaying" the emotion of the concert. IIRC an academic study was done on this effect a while back.
If that was true, after seeing The Who years ago at the Toronto Sky Dome* I would now hear the sound of an AM radio in a trash bin in a large tiled washroom.
*lucky I won tickets from a local radio station contest and didn't pay the silly price that floor tickets were going for.
"Go to live performances in the flesh."
Apparently one should close one's eyes to listen to an orchestra. It prevents one from witnessing the brass section turning their horns over and over to drain copious volumes of spittle onto the floor, leaving glistening patches of bubbly drool there, shimmering under the lights. It was difficult to keep the bile in place. Music? What music? I don't recall any music. Just saliva. Night of the Saliva.
At least nobody bit into a freshly killed chicken.
Recorded music has its advantages. Fewer diseases.
> I think I can detect an instantly perceptible MP3-FLAC difference
Maybe you can - but does it really matter?
Most people I know listen to music as a form of entertainment, generally as relaxation. They don't listen to it on the assumption that they will be tested on its content and clarity after hearing it. Likewise, they don't listen, eagle-eared, waiting for that instance in the third passage where the conductor's tummy rumbles - or where you can hear the tube train rolling past the recording studio.
Having said that, the first time I plugged in my home-made transmission line speakers (still with me 30+ years later) and cranked up Wish You Were Here it was a bloody revelation. I have witnessed similar reactions when I have plugged a basic 2+1 speaker system into friends' flat-panel tellies: where did all that sound come from? They'd been listening to tinny audio for an age, not realising there was anything better.
Those step-changes are huge, though, whereas the difference between an average quality MP3 and a FLAC is perceptible - but you're merely detecting the difference, not listening to it. And as soon as someone in the upstairs flat farts, or a car rumbles past, the difference vanishes. As it also does on anything less than my TLs.
One friend can definitely tell the difference between lossless and the various lossy codecs. At decent bitrates, apart from some artefacts with OGG, I can't. Even when listening via his kit.
I am tempted to get him to teach me what to listen out for, but afraid that if I do, I will always hear the differences.
If the original source is poor (either technically or musically) or the speakers (or headphones) are not of very good quality then a high rate MP3 (256kbps or higher) is not likely to be distinguishable for the vast majority of people.
(Most headphones under £50 and most speaker systems under £500 cause far more alteration to the music than a high rate MP3 produced by a reasonable encoder. The rest of any modern sound system is so much better than the speakers (or headphones) that the speakers are the determining factor.)
For some modern "music" almost any change would be an improvement (especially the mute button!) - a description (all too apt) of one type said that it sounded like someone kicking a metal dustbin half filled with glass bottles down concrete stairs while cursing it!
Most headphones under £50 and most speaker systems under £500 cause far more alteration to the music than a high rate MP3 produced by a reasonable encoder.
Many mid range speaker systems have very good output. The problem tends to be crap acoustics: it doesn't matter if you spend £3000 on your speakers if you stick them in the corners of your living room - they're going to sound like crap and the price tag is more to do with pose value than acoustic fidelity.
At the other end of the spectrum there are some very good studio monitors from around £150/pair. When they get specified it isn't because the studio are trying to cut corners - it's because the people installing them know how to get a pair of speakers to sound right.
I build my own speakers, but if I were to buy any, I would probably go for professional active studio monitors. The prices one sees high-end consumer gear going for are a joke.
You're right about room effect though - it often amazes me where people stick their speakers.
>(Most headphones under £50 and most speaker systems under £500 cause far more alteration to the music than a high rate MP3 produced by a reasonable encoder.
Some headphones over £50 are pretty terrible in this regard; not to mention any specific brands that may have been pictured in the article.
My father was a hifi junkie. . . read all the magazines, saved up and upgraded his components over the years, and made the rest of the family help him do blind hardware comparisons. The result was that he had highly trained ears and could really tell the difference between these things. But I do wonder if all he really achieved was making himself more fussy.
I think for him the best possible sound experience would have been some mythical perfect set of stereo equipment that would reproduce every last nuance as if the whole band were right there in front of him. I tend to think that a good rock song can always be enhanced by the roar of a V8 and the smile of the girl sitting next to me (at least this is how I imagine it working.)
I suspect I probably love music as much as he did, but I can enjoy it through relatively cheap headphones and whatever my current phone is. I think he was so in love with the idea of reproducing it with maximum fidelity, of hearing every last instrument and sound, that maybe it became more of a technical exercise than anything else. I try to make sure that I don't get too interested in perfect sound reproduction, as I worry it will only hamper my enjoyment in the long run.
And in any case, he left me his stack of Cyrus kit and PMC speakers, so I'm good.
> But I do wonder if all he really achieved was making himself more fussy.
Ignorance can be bliss when it comes to low quality music. When you train your ears and mind to pinpoint compression artifacts, you can't turn it off. Suddenly, all of those 128 kbps MP3 audio files you grabbed from Napster in the 1990s are garbage to your ears.
> maybe it became more of a technical exercise than anything else
Fixation with perfection can be a really bad thing. Reminds me of the chase that Japanese radio manufacturers have with distortion levels. Most people can't hear levels below 1%, yet during the '80s and '90s, the average THD for a receiver dropped into the hundredth and thousandths of a percent. Great on paper, but was it worthwhile?
"during the '80s and '90s, the average THD for a receiver dropped into the hundredth and thousandths of a percent."
They went beyond that, I remember ads with 0.000005% listed in big bold type. How would you even measure that? And some had audible IMD but that was not listed on the box so no problem. It was the marketing fad at the time. Like who could have the most drivers in a speaker (Bose tried to sue Consumer Reports for saying their 901 speakers were bad). Someone should be along any second to say how great 901s are...
>Suddenly, all of those 128 kbps MP3 audio files you grabbed from Napster in the 1990s are garbage to your ears.
I think 128kbps is low enough that most people could pick which is which for some types of music. If you double it to 256 and add some VBR, the difference is physically inaudible to most of the population. At 320, you would have a lot of trouble in a double-blind test picking one from the other.
That is not to say that it doesn't sound better to you. We know for example that the placebo effect is real. Someone who is told that a particular medication will help their breathing performs better at altitude tests than control even if the medication is just a sugar pill. I am in no doubt that someone who knows they are listening to a lossless encoding will experience in their brain a better quality of sound.
Nonetheless, your point about 1990s tracks sounding like garbage is correct* but that has little to do with how the music is encoded ;)
* especially Only Happy When It Rains.
If you dial down the quality of mp3 enough, anyone can tell. It's no different from video compression.
So the question is 'what quality do I need?' and that's answered quite easily by a test!
Most of the music I listen to on the go is compressed with Ogg at a bitrate of 320kbps, and it's good enough that I can't tell between that and the flac original. At half that bitrate it sounds shit to me, especially classical music.
What the upper limit is before nobody can tell the difference at all I have no idea, but it's irrelevant, because the mystique of the 'audiophile' world kicks in long before, with people requiring the room to be painted in red matte because of the oxygen molecules or something.
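That "answered quite easily by a test" is worth making concrete: run ABX trials, then ask how likely your score is from pure coin-flipping (a one-sided binomial check). A stdlib-only sketch; the 16/12 and 10/10 figures are just example numbers:

```python
from math import comb

def abx_p_value(trials: int, correct: int) -> float:
    """Probability of scoring at least `correct` out of `trials`
    ABX trials by guessing alone (one-sided binomial, p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 right out of 16 still happens by luck almost 4% of the time:
print(round(abx_p_value(16, 12), 3))   # 0.038
# A perfect 10/10 run is much harder to fluke:
print(abx_p_value(10, 10))             # 0.0009765625
```

The usual convention is to claim an audible difference only when this p-value drops below some threshold like 0.05 - so one lucky run proves little, and "I can always tell" should survive a longer session.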
My entire music collection is stored in FLAC, mostly ripped from CDs. I carry it with me in multiple microSD cards.
Why the hell not? Storage is dirt cheap, especially considering that people paid $200 for a decent CD-walkman and wallet 15 years ago.
Can I tell the difference between a 1000kbps FLAC and a 192kbps MP3 in a blind test? Most likely not. All I know is that in 2014 I'm bloody sick of hearing compression artifacts in dance clubs, on the radio, in other people's cars, and I enjoy the peace of mind knowing that I have a bastion of unbastardized harmonics available to me in my pocket whenever I want.
If I'm on the go, I'll mostly stream my music from home or have it with me. MP3 makes a HUGE difference in size compared to lossless, both in storage required (fits all on player or not), and bandwidth. And in 99% of the cases, I'm on the go (background noise) or at work (no HQ audio gear), so good luck hearing the difference.
Yes, I can hear the difference between MP3 and lossless, but it makes no sense to use lossless on many occasions.
"So the theory for higher-fidelity playback of stored music through the Sonos system is to get a FLAC copy of the music, convert it to ALAC, import that into iTunes, re-set the Sonos music index, and then play the music."
WHAT ? Why are you messing about with all these steps. To play flac:
1). Rip the CD to flac format onto your NAS drive.
2). Re-index the SONOS music library.
3). Play the flac file on the SONOS from your NAS drive.
That's what I do...
Oh, wait a minute. iTunes and Apple - there's your problem mate. FLAC is a *Free Software* created format. That's like garlic to a vampire for Macs :-).
>Even easier still, just put the CD in the drive and listen to it.
Er, on a Mac? I have always found them to be well behaved and reliable machines, except for their optical drives which sometimes exhibit a tenacity for holding onto CDs rivalling that of a neurotic spaniel with a tennis ball.
If you compare spectrograms of properly extracted source audio that is properly encoded as FLAC against a high-quality lossy file WITHOUT any filters, you will notice they are exactly the same. I'm talking maybe a pixel difference here and there. You could screenshot both spectrograms, load them into Photoshop, and cancel out every pixel which is the same, and you'd be able to visualize the (extremely rare) differences. A spectrogram is an imperfect representation of an audio file, but it's the best we have and likely ever will have. Imperfect as it may be, the proof is in the listening.
The greatest area of loss in a lossy file is in the extreme high and low frequencies, as well as the stereo image. Kill all filters and you're left with the question of stereo image. Even MP3's joint stereo is great, but Ogg's "point" stereo implementation is better. Comparing both, at high quality, yields few noticeable differences from the lossless source.
As always, it's about the way the tool is utilized, not so much about the tool itself. This has really always been the way it is, back to the days when you simply played loudly into a horn that caused a stylus to directly cut the wax. This has only become more true since entering the digital domain. Anyone who claims they can hear the difference between filterless CBR 320 mp3, VBR 224-320 mp3, CBR 500 ogg, FLAC/other lossless formats, and the lossless source is lying.
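The spectrogram-diffing idea above can be sketched in a few lines of numpy. Note this uses a brutal 16 kHz brick-wall filter as a stand-in for a lossy codec, which is a big simplification (real encoders shape loss psychoacoustically), but it shows where the difference lives:

```python
import numpy as np

def spectrogram(x, nfft=256, hop=128):
    """Magnitude spectrogram via a short-time FFT (no window, for brevity)."""
    frames = np.array([x[i:i + nfft] for i in range(0, len(x) - nfft, hop)])
    return np.abs(np.fft.rfft(frames, axis=1))

fs = 44100
t = np.arange(fs) / fs
# A 1 kHz tone plus a quiet 18 kHz component standing in for cymbal "air".
clean = np.sin(2 * np.pi * 1000 * t) + 0.1 * np.sin(2 * np.pi * 18000 * t)

# Crude stand-in for a lossy codec: zero everything above 16 kHz.
spec = np.fft.rfft(clean)
freqs = np.fft.rfftfreq(len(clean), 1 / fs)
spec[freqs > 16000] = 0
lossy = np.fft.irfft(spec, len(clean))

diff = spectrogram(clean) - spectrogram(lossy)
lost_per_bin = np.abs(diff).sum(axis=0)
bin_hz = fs / 256                      # frequency width of one FFT bin
print("loss peaks near", int(np.argmax(lost_per_bin) * bin_hz), "Hz")
```

Summing the difference across frames and finding its peak puts the loss squarely in the band above the cutoff, which matches the comment: the differences cluster in the frequency extremes.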
IME: The in-built speakers of cellphones and laptops are pure sh!t (it's scary that people actually listen to music that way). Earplugs are usually crap. Cellphones are often crap (my Moto G is god-awful). Headphones can be anywhere from god-awful to very good. Computers are usually good. Desktop speakers are usually bad to decent. Real speakers are usually good.
As for lossy vs lossless, I (and probably everyone who's interested in this kind of thing) have played around with encoding software, and on my midrange/decent gear I can easily hear the difference between CD audio and 128 Kbit lossy, but it's really doubtful at 256Kbit lossy, and I hear no difference at 320Kbit lossy.
But, of course, if I load it up in my Moto G and listen to it through my earplugs, then everything sounds equally sh!tty.
I really liked this article because I believe there are major differences in the way audio is perceived.
You come up with some great arguments, and arguments I had with myself for a while. I was utterly convinced I could hear differences between mp3, ogg, wav, flac and CD. I mean totally convinced.
My friends said I was mad, but my total belief was that I could 'feel' differences in the way the audio affected me.
I worked in radio for a while and ended up looking at audio for a long time, and trying to see that 'thing' which I could hear the difference in. I couldn't. I just edited that bit of news or whatever to the finest point. Sometimes things in audio just 'worked', and don't ask me how.
Then, one day I realised there was a difference in the formats. I found how to test it.
I had the entire BBC Sound Effects Library at home on CDs from work while I was building an anti-drink-driving audio campaign. The BBC library is fun, and I found a CD, number 44, 'Cats'.
Being a cat owner, I ran some of the CD tracks through the speakers and suddenly my 3 cats were in the room!!! Especially the howling-cat sounds! This was weird!
Because the WAV rips were huge (I needed to put the CDs back in the office), I converted the WAVs to FLAC, having kept a WAV copy of the effects I needed, but still enjoying annoying the cats.
FLAC and WAV playback ALWAYS got the cats back into my studio, looking around to see where the 'other' cat was crying.
If I encoded that same sample to MP3, the cats here slept on and didn't bother. They looked around if it was ogg.
This taught me one thing. As a human I might not hear the finer bits of a sound, but we feel them.
Animals are great markers of sound, but we still feel those sounds in a way, they are air compression after all.
Anyway, if you want to annoy your cats or dogs, the BBC Sound Effects CDs are fun...
I once bought a record player on the recommendation of the previous owner's cat. I was buying second hand and had taken Pink Floyd's Grantchester Meadows as a test track. When the birds started chirping the cat attacked the speakers. If the audio was good enough for the cat, it was good enough for me.
Can you tell the difference between MP3 and linear (PCM WAV/FLAC/lossless) audio? All depends on the encoding bitrate - personally, I don't reckon I could tell the difference between 320k MP3 and linear, but at 96k? Hell, yes.
Take a piece of music you know - if it's got a piano in it, so much the better - and encode it at all sorts of bitrates. Then listen to them vs. the uncompressed version: start at the horrible-sounding low bitrate ones, and see how far you need to go until you can't tell the difference.
At the commercial radio group I work for, we distribute our production audio uncompressed, and our broadcast streams internally as 384k MP2, or 345k aptX.
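The "encode at all sorts of bitrates" exercise above is easy to script. A sketch that only builds the encode commands — it assumes ffmpeg with the LAME encoder is installed, and `piano_test.wav` is a placeholder filename:

```python
# Build ffmpeg commands to encode a reference WAV at a ladder of MP3
# bitrates, so each rung can be compared blind against the original.
BITRATES_KBPS = [64, 96, 128, 192, 256, 320]

def encode_commands(source_wav: str) -> list[list[str]]:
    """One ffmpeg invocation per bitrate; run them with subprocess.run()."""
    cmds = []
    for kbps in BITRATES_KBPS:
        out = source_wav.rsplit(".", 1)[0] + f"_{kbps}k.mp3"
        cmds.append(["ffmpeg", "-i", source_wav, "-codec:a", "libmp3lame",
                     "-b:a", f"{kbps}k", out])
    return cmds

for cmd in encode_commands("piano_test.wav"):
    print(" ".join(cmd))
```

Feed each output plus the original into an ABX tool and work up the ladder until you stop passing — that bitrate is your personal transparency threshold.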
Spot on, AC.
Its a sliding scale:
Low bitrate: 'Metallic goblin laughter' artefacts clearly audible to everyone.
High bitrate: indistinguishable by ear from source.
Of course, storage is so cheap that it is a no-brainer to rip CDs as FLAC, if only to be able to transcode them to any desired future format without cumulative compression artifacts creeping in. This is much the same philosophy as scanning photographs at a higher-than-required resolution and saving them in a lossless format (so as to avoid any possibility of jpg > edit > jpg > edit > jpg etc. jaggedness occurring).
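The cumulative-artifact point can be illustrated with a toy: bouncing between two *different* lossy codecs accumulates error, while lossless round-trips are bit-identical. Coarse quantisation stands in here for real codecs, which lose information far more subtly:

```python
def lossy_pass(samples, step):
    """Toy 'codec': quantise to a grid of the given step, discarding detail."""
    return [round(s / step) * step for s in samples]

original = list(range(0, 50, 7))

x = original
for step in (8, 5, 8, 5):          # e.g. mp3 -> ogg -> mp3 -> ogg
    x = lossy_pass(x, step)

y = list(original)                 # a lossless codec decodes to its exact
                                   # input, however many FLAC <-> ALAC hops

worst = max(abs(a - b) for a, b in zip(original, x))
print("worst-case error after four lossy transcodes:", worst)
print("lossless copy still identical:", y == original)
```

Same philosophy as the photo-scanning analogy: keep one pristine master and derive everything else from it.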
"Kit is everything - I blew a stack of cash on top end kit (10 Grand), and I could tell the difference between a bought CD and a CD-R of the same album - the difference was tiny, but I could tell every time. 320K MP3 is good enough for most situations"
How is this even possible when the CD and CD-R are bit for bit identical?
If it's a digital clone then you're right, there should be no difference, the CD and CD-R should be identical.
But if it was a CD and it was ripped to a CD-R in the common way of converting it to MP3s and storing the files on the CD-R then they are very different.
We don't have enough information.
There's no parity on CD Audio, unlike CD-ROM. Therefore, as the bits fly by, you cannot be sure that you read the disk absolutely correctly. (If you read the CD spec, it's a miracle that the bloody thing worked at all). Small errors may have been undetectable audibly but that didn't stop player manufacturers doing oversampling to get a better handle on what the bit value actually was.
The question at the time was whether the bits you burnt on CD-R were "sharper" than those on commercially manufactured CDs and therefore sounded better because a) less quantisation noise occurred and/or b) less processing was being done by the DACs.
Actually, the Reed–Solomon coding used in audio CDs provides 8 bytes of parity data for every 32 byte audio frame. What is missing in audio CDs is a cyclic redundancy check (CRC), which is more robust and could be used to detect rare situations where the simpler parity check fails.
And the whole idea that the audio on a CD-R sounded different than a traditional CD is mostly garbage. The payload of the data frames will be exactly the same between both types of media, so the decoded audio should be exactly the same. The only difference comes from the higher number of read errors that a player will encounter with a CD-R, necessitating a trip to the imperfect error correction routine. People who thought they were "sharper" were fooling themselves.
CDs are stamped, and frequently the hole isn't in the centre of the disc. CDRs are burned whilst spinning around the hole in the centre. One theory put forward was that the extra movement of the laser head back and forth as it tried to track an off-centre disc somehow contributed to a poorer sound, be that for physical or electrical reasons.
I am one who has tried this test of copying a CD to CDR and can concur that sometimes there are very obvious differences between the two - and it wouldn't need a well trained ear to hear it.
WRONG - there is a HUGE amount of error correction coding on an audio CD. Cross-interleaved Reed-Solomon coding is used that can correct error bursts of up to 3,500 bits. The audio data rate is 1,411,200 bits/second; the data rate including error correction and other information is 4,321,800 bits/second - about 3 times as much. During one of the early demonstrations of the tolerance of CDs to errors, Philips (one of the inventors of the CD) showed a CD with a hole in the data area still playing perfectly.
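Those figures check out against the Red Book parameters (44.1 kHz, 16 bits per sample, two channels); a quick sanity check:

```python
# Red Book CD audio: 44,100 samples/s x 16 bits/sample x 2 channels.
audio_rate = 44_100 * 16 * 2
print(audio_rate)                              # 1411200 bits/s

# Channel bit rate including error correction and framing, per the figure above.
channel_rate = 4_321_800
print(round(channel_rate / audio_rate, 2))     # 3.06: roughly 3x overhead
```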
showed a CD with a hole in the data area still playing perfectly.
Anyone remember the TV demo, I think it was Kieran wossname on Tomorrow's World, spreading strawberry jam on a CD and showing that it still played perfectly? Impressive, but it still made me wince...
The hole in the track was part of the setup of professional (broadcast) CD players to ensure they maintained tracking across damage - there was a test CD with a series of increasingly sized holes to set up the servos. There was never any intention on a CD that the data should survive such damage, merely that you woul<tick>you woul<tick>you woul<tick>you woul<tick>you woul<tick>dn't get stuck in the 'groove'.
From memory - it's been a long time, (bloody hell, thirty years!) - the CD first tries for error recovery/correction from the parity/interleave, then tries interpolation of the signal for the duration of an error, then momentarily mutes (or holds the DC level) and only then gives up in disgust.
For what it's worth, the CD audio data is self-clocked, but uses a constant linear velocity drive to ensure the bit rate is approximately constant throughout. The bit encoding maps every eight data bits to fourteen channel bits (so a lot of overhead) to ensure sufficient transitions exist, even on silence, to clock the output.
Had memory been cheaper when the CD was invented, it might have been that reclocking the data - through a first in-first out memory buffer - would have been used; it would have made a noticeable difference to the audio output.
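The "sufficient transitions even on silence" requirement is a run-length rule: in the CD's EFM channel code, successive 1s must be separated by at least two and at most ten 0s, so the read clock can always lock. A small illustrative checker (not a real EFM encoder; the bit patterns below are made up for the demo):

```python
def rll_ok(bits: str, min_zeros: int = 2, max_zeros: int = 10) -> bool:
    """Check the EFM run-length rule: every gap between successive 1s
    contains between min_zeros and max_zeros 0s."""
    ones = [i for i, b in enumerate(bits) if b == "1"]
    gaps = [j - i - 1 for i, j in zip(ones, ones[1:])]
    return all(min_zeros <= g <= max_zeros for g in gaps)

print(rll_ok("01001000100000"))  # True: gaps of 2 and 3 zeros
print(rll_ok("01100000000000"))  # False: two adjacent 1s
print(rll_ok("10000000000010"))  # False: an 11-zero gap, too long to clock
```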
>Had memory been cheaper when the CD was invented, it might have been that reclocking the data - through a first in-first out memory buffer - would have been used; it would have made a noticeable difference to the audio output.
It was featured on Sony 'Discman' portable CD players as 'ESP' - electronic shock protection. It was featured on all MD players because, like a computer HDD, the data was always stored sequentially (you could delete or rearrange tracks on a MD). The amount of buffer varied depending upon the model of Discman / MD player you had - the pricier models tended to have more solid-state memory, expressed in 'seconds' of anti-shock protection.
The Sharp 722 MD player (which had a scroll wheel in 1998) would play reliably in a pocket whilst walking - the cheaper 702 player would occasionally have to catch up with itself.
This was a year or so before the £600 5GB iPod, and before 32MB (yes, MB) MP3 players were seen in Currys.
[Side note: If Sony hadn't been so awkward about copy protection and proprietary formats, a proper High Density Data MiniDisc (later versions could do around a GB, normal MDs were about 100 MB) could have pre-empted the iPod's impact on the market. Instead, we had SonicStage software and beautifully designed 20GB Sony HDD players that could only play ATRAC - not even MP3! - years after the iPod. Silly Sony.]
The best way to do it was always to hook the CD player up to the MD recorder via optical. Some of the MD hifi separates also had a PS/2 socket so you could plug a keyboard in for titling.
To this day, I'm convinced that the software Sony distributed for managing NetMD devices was partly responsible for the demise of MD. Their refusal to make the USB-MD interface open only compounded their errors.
"How is this even possible when the CD and CD-R are bit for bit identical?"
While I do not quite buy that you can hear the difference, your response is far from correct.
The cause of the trouble is what a CD really is: a clocked data signal. It isn't a data medium holding a lossless file that you can copy bit-perfectly.
Which is why, when ripping a CD, the gear and software matter very much: the gear for its accuracy, clock jitter and error correction; the software for its ability to operate the drive correctly, and its support for AccurateRip.
If you can buy the studio-quality FLAC (and not some converted mp3), take it over the CD!
Haha, this reminds me of the supposedly lossless, hi-def files from Qobuz. One evening, sitting at the computer, I found that they had Mozart's C-minor Mass directed by Herreweghe, exactly the same recording I had enjoyed earlier that day from my CD (it was not ripped then). Being a lazy (or experimental) sort of person, I decided to stream the music rather than put my CD in. It played nicely, up to the solo soprano, when it started clipping quite horribly. Comparing the same part with my CD - no clipping, and my poor underpowered mini system played this part rather quietly, but cleanly. Turning down the volume on my computer speakers (active Samson studio monitors attached to an Epiphany Acoustics DAC) streaming from Qobuz, I heard clipping again. So I ripped my record to FLAC just to play it on the same equipment, and there you go: lovely, clean sound. It turns out that those "lossless, hi-def" Qobuz files were totally messed up; they probably never checked the final result of whatever conversions they were doing.
I cancelled my Qobuz subscription the same day, and from then on I only use FLAC files I ripped from my own CDs, using equipment and processing I trust and know. Or, when I do not care about quality and just want to listen to something different, it's from a lossy source such as Spotify. And my "poor underpowered" mini system got an upgrade in the form of better speakers :)
Although the analogy is not really relevant, I have to ask: have you never had a corrupt file after downloading from the net? Really?
To get back on track, though: has no one else here taken a disc to a friend's, only to find they have the same disc but for some reason they don't sound the same?
I get the point that there is error correction, so I can't offer an explanation - but you have to bear in mind that a pressed CD comes from a stamp that gradually wears out during its lifetime until it needs replacing. In some ways this is more like having an 'analog' CD. Two CDs made on side-by-side machines won't be exactly the same, but again the EC should compensate.
Maybe someone more knowledgeable in this area of the industry could comment?
For me, my 1972 KEF speakers beat anything modern, probably because it's what I'm used to after 40 years. I recall back in the late 80s one 'audiophile' mate groaning that I didn't have a CD player, and one day he came round to give me his old one. A few weeks later he was round and some music was playing and he said "See how much better CD is!" What was playing was some 8-year-old vinyl. A couple of years ago I was looking to bring the KEFs back into use, having been persuaded to put them in the attic and try modern gear. I looked about for a solution and got amplification suggestions ranging into several hundred dollars; a sound engineer relative recommended Classe http://www.martinshifi.co.uk/brand/4/classe/
Someone else said you can always extract money from people with big box and recommended another solution. A few weeks later the relative came over and said "Nice sound but where is the Amp" the rest of the weekend he stared at the thing and muttered but ... but ... but... but
As the other guy said (Trent Reznor has one of his guitar amps), the technology that went into computers in the last 40 years also went into audio.
I bought a cheap Tripath for a low-fi solution (ceiling speakers in the kitchen area).
I still don't have it connected up to the ceiling speakers, but I did try it out with some hi-fi speakers just for fun.
The results were surprisingly good.
I can hear a difference between the Tripath and my Marantz amplifier but it isn't much compared to the cost difference.
On the subject of subjective hearing - isn't one of the key areas the DAC?
I suspect that the DAC in say a mobile phone (or even a PC motherboard or sound card) isn't designed for the highest fidelity sound reproduction, but more for cheapness, size and power use.
However good the digital, it does have to be converted at some point to analogue and I suspect that a poor conversion may negate any significant differences between the digital formats.
I bought a standalone DAC to test this, and I can convince myself that I can hear a significant improvement in the sound when playing through my PC and USB to the DAC instead of using the on-board DAC.
Haven't tested it with phone or tablet yet, though.
Just to clarify, Class D doesn't mean 'digital'.
The Tripath amps have enjoyed good reviews, especially given their price. There seem to be people who buy the inexpensive ones and then upgrade the capacitors themselves.
For more on smartphone audio, AnandTech have one of these http://www.ap.com/products/apx585 and have produced graphs and everything:
For each set of tests we can add a load, simulated or real, to see how the device handles more demanding headphones. For this article I am sticking with only a set of the updated Apple Earbuds. They are probably the most common headphone out there and easy to acquire to duplicate testing. For future tests the other loads will be AKG K701 headphones and Grado SR60 headphones. Both models are popular, and I happen to own them.
There are a few main tests we are going to use for all these reviews. Those key tests are maximum output level, Total Harmonic Distortion + Noise (THD+N), Frequency Response, Dynamic Range (as defined by AES17), and Crosstalk.
Um, you can't make such a statement without clarifying what you mean by mp3 and what is being played. A 96kbps CBR sounds very different to a 320kbps VBR. Most people could pick the 96. Almost no one could pick the 320 in a statistically significant way in a double blind test.
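"Statistically significant" has a concrete meaning here: under the null hypothesis that the listener is guessing, correct ABX answers follow a Binomial(n, 1/2) distribution. A sketch of the one-sided p-value:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` right out of `trials`
    by pure guessing (one-sided binomial test with p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12/16 is commonly taken as a pass (p < 0.05); 10/16 is not.
print(round(abx_p_value(12, 16), 4))
print(round(abx_p_value(10, 16), 4))
```

So picking the 320kbps file correctly 10 times out of 16 is entirely consistent with guessing, which is why casual "I could totally tell" anecdotes don't count for much.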
What it does do is stop your connections oxidizing.
It's also soft, so sprung connectors make better contact, but it only works if both connectors are gold of a decent thickness. A super-thin layer of gold on one connector is pointless, and if the other is a standard nickel-plated one, there's even some chance of electrolytic action making the nickel side oxidise faster if there's moisture around. If you're building stuff to milspec, gold contacts have advantages. For domestic hi-fi it's just bling, and a way to relieve suckers of their money.
As you quoted the son of an audio engineer as an authority in the article, I'll wade in. My father was also one in what's alluded to here as a golden age of audio fidelity - analogue audio in the 70s recording the likes of Thin Lizzy. He's still doing it professionally 40 years on. His hearing's gone downhill (just hit 60) but he's still got a better ear for a good mix than any of the youngsters coming into the business so he's not short of work.
These days he works in TV, but as a kid I used to help out recording everything from studio bands to the classical concerts he'd record in churches in his spare time. His guidance was that engineering and production is a job - and the focus isn't on audiophiles, it's on making sure what you're putting out is going to match what your audience is going to be listening on. Hence the old studio test of going outside and playing the mix on your car stereo at full blast. If it still sounds good, you've got it right.
So any audiophile argument about pop music is basically doomed to failure. You're trying to hear it as the artist intended... but the artist (and the engineers) put a ton of work in to make sure you could enjoy it the way you heard it. Calm down, and just enjoy the music.
Whilst my hearing isn't what it used to be (too many Black Sabbath/Motorhead/Uriah Heep gigs in a misspent youth), I can hear a perceptible difference between full fat and skinny when listening in my living room, not so much when in the garden, and I think it is to do with the furnishings/stuff absorbing/echoing different frequencies/harmonics etc.
Surely the iTunes import preferences shown on page 2 of this article are for the importation of audio CD tracks (i.e. what format to rip/transcode the WAV to).
It seems a more fair comparison would be this:
1. Have the audio CD of music you want to compare
2. Set the iTunes importation setting to ALAC
3. Place the CD into the drive, import the album, eject the CD
4. Rename the Album title within iTunes (so that all tracks show the new Album name)
5. Go back to the iTunes importation settings page, change to lossy format / bitrate of choice
6. Repeat steps 3-5 as required for whichever formats / bitrates you wish
7. Listen to the resulting tracks
Assuming that ALAC is equivalent to FLAC, why bother with differently sourced FLACs?
My thoughts exactly. Why on earth did the journo introduce so many unneeded stages? It was ludicrous.
The same kit with a CD vs an MP3. Just using a CD drive in your PC vs MP3 is enough for the first round of testing. No messing around with speakers and amps, just decent headphones. If you can hear a difference at 320kbps then take it further.
Totally pointless article.
When I had an OK stereo I had to re-rip all of my 196kbps files to 320kbps, as the OK stereo made the differences stand out like a sore thumb.
Then when I upgraded to a Cyrus set and B&W speakers I had to re-rip it all to FLAC: after searching for a problem in my new set-up for a couple of hours because the sound from my iPod Classic sucked, I eventually did a comparison with the CD and it was night and day.
The old stereo was relegated to the bedroom to use as an alarm clock, and MP3s still sound fine on that system, you can't tell them from the CDs they came on.
I now have a Sonos connect connected to the preamp with a coax digital cable and it plays the FLACs from a network share. It sounds way better than I expected it to, to the point where the CDs were taken out of the living room the same day.
But after listening to Sonos speakers at friends' and in shops, I wouldn't expect you to hear the difference from the 196 to FLAC, let alone the 320...
(Oh, and to answer the question about cables from someone earlier: I use the mains cables that came with the units, and cheap, but very thick, speaker cable as recommended by the shop I got the speakers from - they commented that the expensive cable is for showing off, not for the music)
Thanks for that - first sensible comment in the entire thread. Just shows how people can get sidetracked. The author is using Sonos Play 1s - a 3.5" 'woofer' and tweeter in a tea-caddy sized plastic box.
Tip for the author: get decent speakers, then listen again.
Unless you're paying huge sums for actual hi-fi headphones, as opposed to relatively huge sums to walk round wearing a mediocre fashion statement (sorry Beats users) or Bose, then I really doubt you can tell between FLAC and MP3.
But sat in the comfort of your own home, that's a different matter. As others have said, at least with FLAC the data is all there for a time when you can either afford something good enough to extract it (at the minute start with a Naim Uniti and work upwards) or when something new comes along to use it to the full.
With MP3, though, and CD for that matter (the data is there but 99% of CD players can't extract it fast enough and decode it in real time), the "missing" bits you may not be able to hear affect the room you're listening in, feed their resonance or echo or whatever into the music you can hear, and hence how your ears hear the rest of the music. Another reason to make friends with your hi-fi dealer and borrow or demo speakers in your home.
*You have to be old enough to remember adverts for the Linn Sondek LP12
What Craig said: Storage is cheap. Rip to FLAC because you might as well. Transcode to another (possibly not yet invented yet) format as and when required. Go to pub. Simple.
Oh, it goes without saying to ensure CDs are clean and scratch-free before ripping. Perhaps a £10 disc cleaner would be a worthwhile and inexpensive investment?
Give me the best feeling from my music.
Storage is cheap enough these days not to need compression on a home system. I'll record to WAV if a release isn't originally supplied with the record (or download), then make a 320 for the mobile devices. The only WAV issue is attaching metadata; you just need a good naming system and file structure.
CDs are played in a CD player and ripped to 320 for the digital Traktor..
Perception is everything and you can go as far as your enthusiasm takes you. Just as important is a system that can move air rather than just tickle it. I also prefer a specific listening position and have the mid tweeter at ear level; I like the headphone-like separation you get that way, just with the added smile from the big speaker boxes.
"But how the hell can that be arranged and demonstrated?"
Strangely enough you're not the first to wonder what the data compression is doing to your audio. There is a great audio plugin by Sonnox that allows you to listen to an audio source via up to 5 streams simultaneously compressed through different codecs, bit-rates and depths, and switch between them. It's important to listen to bad encodings so that you can get a feel for the type of distortion you're looking for - once you can easily identify what mp3 artefacts generally sound like, it's much easier to spot them at higher bit rates.
I'm surprised that there's been little mention of AAC, since it's the default format on iTunes. I find MP3 artefacts really noticeable and unmusical (when they are audible), whereas I generally find AAC just gets gradually softer and slightly muffled, which is far less apparent and invasive. A killer test for MP3 encoders is quiet hi-hats with reverb; it all turns into horrible swirly mush.
In terms of demo sources, it's possible to eliminate recording distortion effects altogether by using source material from pure synthetic instruments, like Pianoteq's Play, or most modeling synthesiser plugins, and you can then get audio source material that's never been through an ADC or DAC.
Nice to see how you detail all the steps and their attendant pitfalls.
The key phrase, though, is whether you're listening attentively. If you are, and you know the track well, then you'll notice all kinds of things. If you're not, you're unlikely to notice anything. The reason for this is that the brain uses lots of lossy compression techniques for processing audio and visual data. This is why we're so susceptible to optical and aural illusions – there was a good Horizon program on it a while back. But you will almost certainly notice the difference in the EQ settings. I have music on quietly all day as it helps me concentrate. I can nearly always tell when I forgot to switch the EQ back to standard for music from
Speakers should be able to move enough air and be well enough damped not to sound harsh or tinny. But if the acoustics of your room are poor, like a car or a bathroom, you're unlikely to notice even that.
Want to really know what your various sources sounds like? Get some monitoring headphones on and listen to the quiet bits. Other than that go with what works best for you.
I can tell the difference with my battered Rega speakers & amp, with most recordings... Stuff like Peepshow from Siouxsie and the Banshees sounds completely different in a compressed format. Like another commentard said, storage is cheap, and FLAC shrinks most stuff to 60% of its original size or better, so it's not a big deal to rip CDs to FLAC. It's a great format, simple and effective. I guess a lot of folks are desensitized to compressed music. I don't resent that, it's what suits them; just let me carry on finding new music that sounds good to me. :)
I can usually tell the difference between lossy / lossless, however for convenience / compatibility I tend to use high bitrate mp3 and accept the difference.
It doesn't take high end equipment to be able to hear the difference (unless creative labs megaworks 250 speakers are high end)
Firstly, CDs do have error checking (CIRC encoding), so you can get bit-perfect copies; try dBpoweramp CD Ripper...
One thing you have to be very careful of is psychoacoustics, and how much the salesman is massaging your ego to get his commission... it also turns into how shiny it is...
The acoustics of the room and placement of the speakers can greatly affect the response of your system. So if you're going to spend £50k on speakers, make sure you spend £20k on your listening room...
Also, any white or pink noise from your PC will create psychoacoustic masking effects... (this effect is exploited in lossy compression like MP3)
Also consider the time alignment between different drivers, e.g. tweeter, woofer and separate sub...
And after you have had a massive nerd-out, enjoy your music... :-)
I have had the privilege to be behind the odd console with some of the best monitoring around... and yes it does make a difference but then it gets mastered... and as someone said above music is mastered to the listener to help it sell...
if you want to burn some cash get yourself some sexy studio monitors... my Genelec 8260's do well...
The most annoying difference is MP3's harshness in high frequencies, particularly cymbal sounds and esses in vocal tracks. And these flaws can be heard even on mediocre equipment if you're accustomed to listening for them. The frequency roll-off at the top and bottom can't completely be overcome with EQ as claimed in the article, because some information just plain isn't there. However, these annoyances are so minor there's no point worrying about them except when you're doing some critical listening in a quiet environment. If you strictly listen in noisy places, or the car, or while exercising, etc., there's no reason to care about FLAC.
With that said, if I could have a FLAC file that's the same size as an MP3, obtained through equally convenient means and without the sonic drawbacks, why not have that instead? It's a bit like saying you can have a steak for the price of a hamburger and choosing the hamburger anyway. That's dumb.
" It's a bit like saying you can have a steak for the price of a hamburger and choosing a hamburger anyway. That's dumb."
On the other hand, given the choice between tough steak and moist, juicy hamburger I would chose the latter.
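The earlier point that EQ can't recover a rolled-off band is just arithmetic: once a band has been zeroed, any EQ gain multiplies zero. A numpy sketch (the 16 kHz brick wall is a crude stand-in for codec roll-off):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
# A 1 kHz tone plus some 19 kHz "air".
x = np.sin(2 * np.pi * 1000 * t) + 0.2 * np.sin(2 * np.pi * 19000 * t)

spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
spectrum[freqs > 16000] = 0        # the codec's roll-off: the band is gone

spectrum[freqs > 16000] *= 100     # a heroic +40 dB EQ boost of that band
restored = np.fft.irfft(spectrum, len(x))

# Energy above 16 kHz is still (numerically) zero: 0 x 100 == 0.
hi = np.abs(np.fft.rfft(restored))[freqs > 16000].sum()
print("residual energy above 16 kHz:", hi)
```

No amount of boost brings back information that was discarded, which is exactly the "some information just plain isn't there" argument.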
Source components matter less than final presentation.
Cost of the components does not always reflect the quality of the finished product.
I had been downloading MP3s since Napster first came out, before it was popular, when 128kbit/s was the best quality you could hope for. I knew it wasn't up to snuff, but for "free" it was good enough.
Fast forward somewhat and the pirates obviously decided broadband was pervasive enough to bump up bitrates to 192kbit/s, which was a clearly audible difference; fast forward again to today, and we have 320kbit/s being standard, a night and day difference from those early 128kbit/s files.
Since I have listened to countless MP3 files throughout the years, often "upgrading" my old copies with new ones, and following the progression of encoders such as fraunhoffer's, as well as encode engines like lame, I realized early on, the encoder makes the biggest difference.
The rise of the format spurred a lot of developers to write cd-ripping software, while others wrote mass WAV->MP3 encoders to fulfill the needs of users, obviously these time saving efforts didn't focus on quality, nor did the subsequent wave of combined ripper+encoder software.
I had tested 4 major encoders back-to-back at various points in time, Fraunhofer's being the de facto quality standard (hey, they had a whole institute behind them, right?), and I can unequivocally say most encoders sucked balls. Even at higher sampling or bitrates, many encoders either chopped off too many frequencies in their quest to shrink files, or used simplistic hacks to excite the music to hide the flaws. I found the state of things to be quite terrible.
Then we also had the player wars, Sonique and Winamp being the popular ones with the kids, each played sound just a little differently. If you search the internet archives, you'll see countless press releases of improved sound rendering in the engines, each company extolling how awesome their next one will be. I theorize that in the early days, it was all about artificially re-inflating the deflated MP3 audio to sound like it wasn't butchered, and then as MP3 quality improved, the engines slowly removed most of these "enhancements" to be "more realistic".
Why was I able to hear the difference? Well, I had worked very hard at creating my ideal speaker setup, and it exposed all the flaws: imbalanced resonance, those cymbal artefacts you mentioned and more. You assert there's no standard for speakers, but there is, indirectly; they are called "reference" speakers. A quality German or Japanese 3-way "reference" speaker like the ones I had is going to be big, and it may have seemed ridiculous paired with an underpowered computer at the time, but sound was important to me. They turned my computing experience into a faux studio experience.
I should mention that I worked in the PA and overhead paging field, both as an installer and as a sound theory specialist. Many of my jobs revolved around taking a noisy building, like a pumping station for the city or a giant packaging facility, and fixing the mess some inept previous installer made. The typical problems were mismatched audio loudness, general direction and sound reflections, which all led to one thing: blurbling... i.e. you hear it, but it makes no sense, and for that they failed their safety inspections. This often meant I spent weeks optimizing for clarity and volume, followed by a new inspection (they all passed).
Much later, I also discovered a dirty trick: some CDs are produced with low quality sources, and the compilations people buy are very often inferior to the original source CD. The only reason for this that I could see is that the author/compiler takes the music from whatever sources they can find, does some basic mixing (or, worst case, runs it through an autotune/normalize function, ugh), applies for the relevant publishing licences, and then prints the disc... I have ripped and overlaid the tracks of identical songs in sound editors to find the deviation; it was huuuuuuuge.
My advice: get the original CDs or as close to an untarnished/remastered source (or lossless file) as you can, and make your own mixes if you still want to listen to them on CD, such as in a car. I don't trust any of the MP3-playing head units to reproduce the sound that well; they still bias/enhance the audio unnaturally.
Ah, that brings back memories, of the first time I ever heard a CD. I walked into the lab and was handed a set of headphones (Sennheiser or some such) and told "listen to this". "My name is Luka", the original with just her voice, none of the remixed backing music crap. No tape hiss, no LP crackles. Stunning.
Strewth, the level of comprehension of the technologies involved here has fallen a long way since the days when we would argue over the best LAME version or the merits of mp2 (sic) compression...
@billat29 parity?? CD Audio is the standard and is the same for CD and CD-R. The long block size Reed-Solomon error correction works pretty well perfectly for the purposes of this discussion.
FLAC: seems this is a total red herring to the discussions here. Just use uncompressed WAV or *any* lossless compression format for audio comparison. Why bother purchasing FLAC online?? Just rip a CD. And for best achievable results, rip with EAC using its checksum-based database for bit-perfect-by-consensus result. Mount the WAV and cue sheet using daemontools as a virtual CDROM drive and let iTunes perfectly rip the virtual drive.
Yes, yes for the best "cheap" quality solution, go for a pair of studio monitors, maybe active, or maybe with a simple power-amp.
For soundcard, just use a Behringer UCA202 - perfect USB to DAC, and only 20 notes.
I'm OLD! I have tinnitus. I do remember the 70's; I was working the concerts. I shot too many guns as a kid, indoor target shooting. I can't hear worth a crap. I have a giant high-frequency hearing loss, nerve damage. I carry a 128GB USB drive with my music on it when I go anywhere.
Meaning who really gives a shit except people who love to bitch about something. I certainly can't tell the difference, and I'm not alone.
"I carry a 128GB USB drive with my music on it when I go anywhere."
Do you have a USB port on the side of your head?
Perhaps you've left out some details of all the other necessary USB-to-ears interfacing equipment that you must presumably also bring along when you "...go anywhere."
Mostly I have equipment at both ends, with one exception. It's all your basic simple computer/receiver/amp/speaker setup. Since I "go anywhere" for from 3 to 5 months at a time, it's easier to leave stuff behind. The exception? I have a small FM transmitter that I carry, but I just might buy another, so I can sit outside and listen while I read.
Oh, maybe I can hear the difference... Way back at the dawn of time I bought a CD of a classical piece I really liked, The Moldau. Same recording, just on CD instead of vinyl. When I ripped the MP3s for that album I used the vinyl to do it; it just sounded better. So was it the media, or the engineering, or the method? With the exception of the CD player and/or turntable/cartridge it was the same equipment.
Bought a new amp recently, same speakers. Damned thing sounded like being next to some cars at a light. I had to go down in the basement and grub around until I could come up with an equalizer.
I use headphones. FLAC sounds better than everything else because it is what it is and not some mathematical equivalent that strips anything out. I can't speak to the Sonos issue, but I can tell you that the musicians I know all prefer FLAC to Apple's supposedly "lossless" format and to all MP3, yes, even at 320k. It just sounds better in ways that are tough to articulate without sounding like an audiophile nutcase. It's more natural sounding. I don't care why and neither do they.
I ripped about 150 of my CDs to FLAC.
The way I view it is this. Each album is 450 meg and £200 will get you a 6TB drive. So IF I had 13,300 albums, then I could fill up the drive and worry about picking MP3 over FLAC...
As it stands now, they are there and they can easily be compressed to go on the "to go" gear, but I know that the "source" CD is still there for me to go back to if I ever want it.
And no, with a really high end MP3 rip vs FLAC, I can't tell a difference. I have paid for the CD, so I'll stick with that quality, even if I can't hear it. If the top speed limit is 70, why buy a car that can go 230?
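The storage arithmetic above checks out, give or take rounding (the 450MB-per-album figure is the poster's own, not a general rule):

```python
# Back-of-the-envelope check: how many ~450MB FLAC albums fit on a 6TB drive?
ALBUM_MB = 450                    # rough size of one FLAC-ripped album
DRIVE_TB = 6
drive_mb = DRIVE_TB * 1_000_000   # drives are sold in decimal units: 1TB = 1,000,000MB
albums = drive_mb // ALBUM_MB
print(f"{albums:,} albums")       # ~13,333 -- matching the 13,300 figure above
```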
What I think is criminal is that "audio" publishers still try to push out lower quality crap in the hopes that we might buy higher quality crap... It's a bit like some "BD samples" that used to be on DVD; they were better than the movie. It is shameful.
If I'm going to synchronise all four Squeezeboxes and get the house shaking to Slash's new album at high volume I might as well do it properly. I get the argument for MP3s, but it's only really relevant on the move, where storage is an issue. Hence a nightly script to keep an MP3 mirror of the FLAC files. Best of both worlds.
I find the article quite limited in scope, frankly. All the theoretical postulates can be argued about as much as you like (and I find some missing even from the comments). The empirical part seems to be inhibited by the particular procedure of ripping to a lossless format on a Mac (no criticism of Apple intended). Chris, how about you do some empirical research that goes beyond your own set of speakers and your own Mac and report on the results? Let's devise a few experiments you can do as a journalist.
1. Have you got an audiophile friend with high end equipment? Rip the CDs to FLAC and MP3 and listen to the originals and copies on his equipment and see if you can tell the difference. Intuitively, low end equipment has a bias in favour of lower quality codecs, so high end equipment makes a better experiment in this sense. Whether you can or cannot hear the difference, that will not tell you much about the reasons why, so move to the next phase.
2. Find a decent, professionally staffed audio equipment store and tell them you would like to get a reasonably good, better than basic consumer level shit, but not outrageously expensive audio setup. In my experience, what they will do (after some general questions and a discussion of what you are looking for, budget limitations, etc.) is invite you back with your own CDs. Ask your audiophile friend to help you pick a couple of CDs that are not completely lousy to begin with, and also bring a CD with FLAC and MP3 of the same music - ripped from the same original CDs - on it. They will line up a few decent receivers and a few sets of speakers and will start switching between them while playing the same tracks. My guesses are (assuming your audio perception is not completely degenerate): a) the same digitally recorded music played on different equipment combinations will sound completely different; b) some combinations - not necessarily the more expensive ones - will sound rich in texture and great overall while others will sound flat and poor. That's with the original CDs, no lossy codecs or anything.
[Disclaimer: This item is based on my own experiences choosing audio equipment. YMMV.]
3. Tell the store guys that you do listen to downloaded music and not just to original CDs and you would like to test how the various combinations handle that. Chances are that their DVD player will handle the formats natively. Try to listen to FLAC and MP3 on those combos that sounded great and on those combos that sounded poor. See if you hear the differences in either case.
4. If you can, bring your audiophile friend along for the experiment ("to help you make a choice") as well, as his ears are probably better trained. Don't worry if he likes a different receiver/speaker combination - this does not mean you have a hole in your head, it is very individual. The point is, whether or not he tells you that he hears a difference where you don't, it will be significant.
Report here. The results of the experiments above cannot be published in a peer reviewed journal (small sample, no objective measurements), but will be quite suitable for El Reg, IMHO.
128k is fine for the car, road noise kills any subtle sounds.
192k is fine for most people, even using fairly expensive equipment.
320k works for most, even on stupidly expensive headphones.
ANALOGUE IS STILL BEST THOUGH!!
In all seriousness, much of what we hear between 192k and 320k is subconscious; you don't realise you are hearing more, but you look back and realise you started listening to the music more often, and were getting more enjoyment out of it.
Of course, playing it through your PC speakers, it might as well be encoded at 16k.
(I am a poor, part time Audio buff).
I started moving over to FLAC last year when I got a new MP3 player. With storage being much cheaper it was possible to store a decent amount of lossless audio on a single device. I actually find that the difference is especially noticeable when driving. I'm not sure whether it's a dynamic range thing or what, but I'm able to listen at a much lower volume without the car noise obscuring the music than when playing MP3s.
When listening at home it's not so obvious, but MP3 can sometimes introduce unwelcome and disturbing artifacts. Even on a far from "hi-fi" setup, I was listening to "Equinoxe" a couple of days ago and had a big WTF moment. I'm pretty sure it was a 192kbps file but it's since been deleted and replaced with FLAC.
Does anyone remember Barry Fox in Hi-Fi News & Record Review claiming that you could get better quality audio from a CD by using a green marker to draw around the edge of the disc? That was the day I never took any articles about audio seriously again. Even when it correlates with my own opinion I have to double-check! :-)
Even on an old Blackberry 9780 with £40 Sennheiser headphones the difference between FLAC and MP3 is noticeable. It all depends on the music being played.
This article is funny. It doesn't seem as if the writer wants to find a real result, just trying to defend his Apple/Sonos/iTunes preference. Seeing a set of Beats headphones in there just made me laugh and not actually bother reading the thing. I knew there would be more interesting comments than detail in the article.
Personally I have gone through those wasted hours of ripping 300+ CDs to MP3 320kbps, and then I replaced the HiFi setup. Annoying. Spent £1500 and MP3s then sounded underwater to me compared with the CDs. This is how I discovered the wonders of FLAC and have never looked back. I'd rather spend a little bit more on my storage space than compress my music to death.
I also find it hard when I listen to someone else's music system where they are playing MP3s. Or the worst crime of burning a CD from MP3s.
What is encouraging is that FLAC support is spreading. Even Sonos now handle FLAC (as long as you ignore iTunes). Every non-Apple phone plays FLAC. It is starting to creep into online sales (just waiting for Amazon to go FLAC).
Last month I finally swapped my car stereo for a FLAC-capable one and yes, the difference is noticeable, even with the road noise. I find it surprising how, when an MP3 album comes on, the music goes back underwater compared with the album that was on before.
It's all personal choice. Life would be so boring if we all agreed and were all the same.
Can el Reg organise a blind test? Given a set of listeners who claim to be able to tell the difference with their named tracks. Place them in a room with your reference audio kit of choice and play FLAC and MP3 versions of the tracks and ask the listeners to rate which was which for multiple tracks. Given all other things are equal it will only be the audio format that changes so over enough tracks we can see if there is a statistically significant result.
'Can el Reg organise a blind test? Given a set of listeners who claim to be able to tell the difference with their named tracks, poke their eyeballs out, and bandage them up as best you can. Place them in a room..."
There. I fixed it for you.
We can't take any chances that they might peek.
poke their eyeballs out
Well, the customary practice is called a double-blind test, but I don't think that's what they have in mind. They usually just make sure that the people setting up the test don't know which source is which so they don't give subconscious clues.
Then again, for El Reg, who knows...
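The statistics for a listening test like the one proposed above come down to a one-sided binomial question: how likely is a listener to get at least k of n trials right by pure guessing? A minimal sketch (the 12-of-16 figures are just illustrative):

```python
from math import comb

def guess_probability(correct: int, trials: int) -> float:
    """P(at least `correct` right answers out of `trials` by coin-flip guessing)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. a listener scoring 12 out of 16 in an ABX-style test
p = guess_probability(12, 16)
print(f"p = {p:.4f}")  # under the usual 0.05 significance threshold
```

So over enough trials per listener, chance performance and genuine discrimination separate cleanly, which is exactly why the "multiple tracks" part of the proposal matters.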
iTunes does not use MP3 but AAC (Advanced Audio Coding) in an M4A (MPEG-4 audio) container, which is an evolution of MP3 but far superior in many ways.
I agree that at high bit rates with AAC it requires an experienced listener with good source material and a good playback system to tell the difference. However, one of the issues in listening to non-lossless audio long term is the mental fatigue it can cause. This is particularly evident in cinema soundtracks for FX-laden movies.
My significant listening experience has shown that what is acceptable on a middle range system, when played back through a quality system very quickly results in listener fatigue. So although the average listener may not be able to consciously tell the difference in A/B comparisons, this does not mean that there are no subconscious effects such as increased cognitive load due to having to re-create the missing sound information.
What MP3 does is chop the sound into 24ms (from memory) chunks and perform a Fourier transform to move the information from the time domain to the frequency domain. Within the frequency domain it's easy to filter and/or scale the frequencies you don't have the bit rate to transmit, and to remove completely those parts which the selected perceptual coding model claims cannot be heard.
When you replay it, the reverse occurs, and the remaining data is converted back into the time domain (I've omitted details of other compression coding on the data itself as it's not relevant) and replayed. Two things have happened now: you've lost what the coder thinks you can't hear, or had it reduced in precision, and you've lost the phase information in the original signal, which may or may not be significant. The theory is that the human ear can't hear phase information; I'm not so sure, but...
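The chunk-and-transform idea described above can be sketched very loosely as: transform a block to the frequency domain, discard the weakest bins, transform back. This toy version skips the MDCT windowing, the psychoacoustic model and quantisation entirely; all names and numbers here are illustrative, not the actual MP3 algorithm.

```python
import numpy as np

def crude_transform_code(block: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Toy transform coder: FFT a block, zero the weakest bins, inverse FFT.
    Real MP3 uses an MDCT, a masking model and quantised scale factors."""
    spectrum = np.fft.rfft(block)
    # Keep only the strongest `keep_fraction` of bins, zero the rest
    n_keep = max(1, int(len(spectrum) * keep_fraction))
    threshold = np.sort(np.abs(spectrum))[-n_keep]
    spectrum[np.abs(spectrum) < threshold] = 0
    return np.fft.irfft(spectrum, n=len(block))

# A 1152-sample block (the MP3 Layer III frame size) of a two-tone test signal
t = np.arange(1152) / 44100.0
block = np.sin(2 * np.pi * 440 * t) + 0.1 * np.sin(2 * np.pi * 7040 * t)

lossy = crude_transform_code(block, keep_fraction=0.05)
error = np.sqrt(np.mean((block - lossy) ** 2))
print(f"RMS error after discarding 95% of bins: {error:.4f}")
```

With `keep_fraction=1.0` the round trip is (numerically) lossless, which mirrors the point: the transform itself loses nothing; it's the bin-discarding step that does.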
A second point is that the MP3 standards *do not* define the codec. They specify that a datastream like *this* shall produce an output *thus*, but they don't say how you get to the datastream. Different codecs make different decisions on the perceptual coding models; some are audibly different with the same algorithm on floating point or integer processors, particularly at low bit rates. It's likely that similar effects pertain on the decoder.
Third point: the DAC on a phone or laptop is unlikely to be anything other than the cheapest the maker could get away with. A high noise floor, less than stable clock, cheap filters (apropos of which, many sound cards (in days of old - I don't know if this is still true) used switched capacitor filters for antialiasing, driven from the sample frequency. An excellent idea - things track automatically. But I came across some cards which also had a high pass filter at the bottom end to stop LF noise; some of those cut off at 300Hz or higher for 44.1k sampling).
Assuming for the sake of argument that FLAC/ALAC is truly lossless - that the bit pattern going in is exactly the same as the bit pattern going out, then the way to test the comparison would actually be to ignore the FLAC coded signal completely and find something clean in 16 bit audio - say a CD rip done with a good CD, ideally not one that's compressed to death as so many are - and get it in a WAV PCM file. Use that file with the codec of your choice to create an MP3 file at the bitrate of your choice; decode that using the decoder of your choice to another WAV PCM file.
Now play the two WAV files. If you can hear a difference, there is one; if you can't, it doesn't matter. The DAC doesn't matter *if that's what you normally listen through*, since it affects both WAV files the same way. For completeness, FLAC encode and decode and listen to that, with the same logic.
If you really want to get silly, use an audio editing tool like Audacity to subtract the two files (you'll have to delay the WAV file a little to get the timing right) and see how much signal is left. That's what the codec thinks you can't hear.
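The subtraction idea amounts to a null test: invert one signal, mix, and measure what's left. A minimal numeric sketch, using crude 8-bit requantisation as a stand-in for a lossy codec (purely illustrative, not real MP3 behaviour):

```python
import math

def rms(samples):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Stand-in "source": 100 cycles of a 1 kHz sine at 44.1 kHz, and a degraded
# copy made by requantising to 8 bits (a toy substitute for MP3 loss).
N = 4410
source = [math.sin(2 * math.pi * 1000 * n / 44100) for n in range(N)]
degraded = [round(s * 127) / 127 for s in source]

# The null test: subtract the two and see how much signal remains
residual = [a - b for a, b in zip(source, degraded)]

print(f"source RMS:   {rms(source):.4f}")
print(f"residual RMS: {rms(residual):.6f}")  # what the 'codec' threw away
```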
Another commenter already tried the Audacity bit, subtract-mixing the encoded file over the lossless one, and noted that, especially at high bitrates, the resultant delta is generally very small, like a tiny warble of noise along the centreline of the graph. Admittedly, there could be some spikes along the line where perceptual coding can't handle things so well, such as high-frequency noise (e.g. cymbals), but it says something about the "pretty good enough" factor.
>Third point: the DAC on a phone or laptop is unlikely to be anything other than the cheapest the maker could get away with.
Some versions of the Samsung Galaxy S4 had Wolfson DACs, and the LG G2 is said to be good, too (and LG contributed to the Android Open Source Project the ability to play 192kHz 24-bit FLAC natively).
That's a good point; although the theoretical digital processing may be undetectable to human ears, one very seldom finds cheap chips which perform to theoretical perfection. Thus the use of 24-bit DACs in good CD players to decode 16-bit encodings: the bottom few bits on cheap DACs are worthless, so if you want real 16-bit accuracy in real life, you need to spec a 24-bit chip.
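The effective-bits argument can be put in numbers with the standard rule of thumb SNR ≈ 6.02·N + 1.76 dB for an ideal N-bit converter (a textbook approximation, not a measurement of any particular chip):

```python
def ideal_snr_db(bits: float) -> float:
    """Best-case quantisation SNR for a full-scale sine: ~6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

# A 24-bit part whose bottom 8 bits are noise still delivers 16 effective
# bits, i.e. it just meets the theoretical CD target.
print(f"ideal 24-bit DAC:              {ideal_snr_db(24):.1f} dB")
print(f"same part, bottom 8 bits junk: {ideal_snr_db(24 - 8):.1f} dB")
print(f"ideal 16-bit target (CD):      {ideal_snr_db(16):.1f} dB")
```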
If you can actually get MP3s decoded to analogue, you have a way to detect problems which is much easier than listening intently. Set up (or find a high schooler who can still do hardware) a good op-amp with the original signal going into the plus input and the signal run through the coder and decoder into the minus, then adjust the levels to get the best cancellation, i.e. the output volume of the signal should be nearly zero. (Feed the op-amp output into your listening system, I forgot to say.) Obviously, anything still audible will be distortion caused by the digital process.
The last time I tried this with an outboard PCM encoder and decoder system I was really badly surprised at how much grunge was audible, and more importantly how really irritating it was. Admittedly, however, that was 30 years ago. I'm sure that since then, manufacturers have learned to make things worse.
Lest you think I'm an old crank, let me introduce you to the Aphex Exciter: a recording tool used to deliberately add irritating distortion to the music, to make it really pop out at you. (Yes, that's where Aphex Twin got the name.) When solid state took over from tubes, everybody raved about the brilliant highs, until they were proved to be distortion. Exact same thing when CDs took over from vinyl. Just add some popping and fizzing whenever your recording hits a high frequency, and it sounds awesome. But you get fatigued quite quickly.
Early on you state that "I knew less than someone devoted to hi-fi with a dedicated listening room, and carefully selected and matched components often costing upwards of a couple of thousand pounds, sometimes much more."
this is clearly bollocks.
All audiophiles listen with their wallets and are mentally ill.
(fucking transmission lines!)
you mean coathangers my son?
your opinion is MUCH more valid and useful than theirs.
But how are we to distinguish whether what the person perceives as a difference is really a difference and not a placebo effect (here's a challenge: can the person tell between "recognize speech" and "wreck a nice beach")? That's why you need multiple people, to average out any bias inherent to an individual.
The superior sound quality of FLAC is quite noticeable, but only to those who've had the opportunity to enjoy quality audio equipment, or to those who are familiar with the sound of a live performance. Most today think the iPod is the reference standard, and so, cannot observe any benefit to FLAC.
IMHO, what we are really looking at here is the development of a proprietary encryption system that is end-to-end digital with no analogue point of presence, and thus uncopyable by any means other than hooking up to the voice coil of a sealed Sonos speaker, iPad, iPhone, Beats headset or HD video monitor. The only way to copy encrypted material in this new developing form is at 1x speed (not a very easy way to steal large quantities of programme material that would normally be taken at 14x to 30x the speed of a CD/DVD disc).
Recording artists that talk of new mastering systems are very much aware of what Apple could do with an end-to-end encryption system (something the RIAA could never do): prohibit copying of artists' work by design of the mastering equipment...
Caveat: I buy my music and videos at Amazon and don't get involved with trying to get 'free stuff'... RS.
Pseudoscientific wank from the pages of audiophile magazines.
Also on Twitter: @wathifi
“With these [speaker stands], our kit sounded ponderous, with a flabby low-end” http://tmblr.co/ZSi1ar1RoDr9o
“We like the warm, full-bodied and gentle sound that these slim wooden stands bring out of our reference” http://tmblr.co/ZSi1ar1RerMT0
“For best results have the arrow pointing in the direction of the flow of music. For example, NAS to Router...” http://tmblr.co/ZSi1ar1H7i3Kn
(Mailed to me:)
In your article on lossy vs. lossless audio you said, "Everything between sample points is lost." Please read up on the Nyquist–Shannon sampling theorem at Wikipedia, which states in part, "no actual 'information' is lost during the sampling process," given certain sampling conditions. This is scientific truth.
Please read the very long and detailed web page at https://xiph.org/~xiphmont/demo/neil-young.html for more information on lossy audio reproduction.
If you've heard of the placebo effect you will understand why people believe lossless audio reproduction MUST be better than lossy reproduction, and hear it as such. But the Canadian Research Council have conducted extensive double-blind (very important) listening tests. At bitrates of 256kbps and above, with a good encoding, only golden-ears individuals (that's not you or me or most people) can hear ANY difference at all.
Cheers .... Chris
Religion and this early in the morning... needed extra coffee...
Why are we still banging this old drum, and why hasn't it been settled one way or the other by the most simple means? Which is.... SAMPLE THE OUTPUT.
Hook up the relevant measuring devices, play the CD/FLAC/ALAC/MP3 and compare. Are there differences? Yes? Yes, but not relevant? Repeat all along the signal chain to the end, i.e. where you'd plug in the speakers. Now measure what was supposed to have come out vs what actually came out. At each step, once more.
Willing to bet that you'll end up figuring that the differences as you moved down the chain became much bigger than they originally were. And that's not "quality difference", just different artifacts produced...
p.s. anyone claiming their signal chain isn't changing any little bit of the input is by definition an "audiophile". Delusional with deep pockets.
Can't say whether it's true in the digital world (although it probably is) but in the analogue recording world there's no doubt that the distortions from the electronic components were orders of magnitude less than those from the mechanical components, i.e. the vinyl/cartridge system and the speakers; and of those two, the greatest single source of alteration in the signal was the interaction between the speakers and the listening environment.
Sometimes it doesn't matter a damn. Recordings of Fats Waller will make me dance and smile, despite the limited dynamic range and technical clarity of the original 1930s recordings. However, the arrangement of the music and the role of the band (piano, gypsy guitar, trumpet, vocals) are more than clear enough to impart the emotion of the music.
Big speakers. We don't just listen with our ears. We can feel music through our bodies at louder volumes. In addition, we can sense frequencies below 20Hz through our skeletons. Witness deaf percussion players, and the presence of church organ pipes at sub-20Hz frequencies (Stephen Jay Gould notes this in his essay 'An Earful of Jaw', since our inner-ear bones evolved from our jaw bones). Generally, I find that with larger speakers it is possible to hear the music clearly, and at the same time have a conversation without straining.
My advice when testing audio components for purchase (does anybody do that anymore?) was not to use some track you liked; you'd enjoy that even if it was some drunken chimp playing it on a kazoo. But if a system could make you pay attention to and admire something that you were kind of blah about, that was a good system.
As stated before, MP3 defines a data-stream and the general approaches to generate it. The exact behaviour/performance is dependent on the psycho-acoustic model used by the encoder. Two 128kbps encodes of the same source material by different encoders may sound different.
The lossy compression of MP3 is analogous to the lossy compression of JPEG/MPEG for images/video. Some types of source material are much harder to compress successfully at low bitrates than others, and the artifacts will be correspondingly more visible (audible).
Your Sennheiser CX300 earbuds will really muddy the sound compared to studio headphones - try Sony MDR-7506.
If you don't know what MP3 artifacts you'd be listening for, take one piece of source material and compress it variously to 256kbps, 192k, 128k, 96k, 64k. On any clean source material that hasn't been mastered to digital mush (and dynamic-range-compressed to within 0.5dB of its life) in the first place, the artifacts at 64k should be glaringly apparent. As a very general rule, you'll be listening for more subtle versions of the same at the higher bitrates. Unless there's something very wrong with the encoder, 256 or 320kbps should sound almost indistinguishable from the source. That's the point!
The early Dire Straits Brothers in Arms CD album (not necessarily more recent remasters/re-releases) had quite a phenomenal clarity. Quite some years ago I made some tests at various bitrates from this material, and it was educational - unfortunately I've now lost both the examples and the original uncompressed material!
With popular music especially, the sound quality depends on what the engineer was mixing on, and presumably what kind of equipment he was mixing it for. Early Beatles albums, for example, sound great on good systems, but with early Stones albums, the better the system the worse they sound. Either they were designed for crappy 60s car radios and cheap record players, or the engineers blew it.
It's all very well quoting details of the equipment characteristics, but the only way of directly testing this is to set up and carry out a well-designed randomized double-blind test. Opinions from experts don't carry much weight because of biases that come about from them knowing what they are listening to, and these may be unremovable by conscious action. Call in an experimental psychologist (I can recommend a few good ones from UK academia who have acted as expert witnesses in related areas), and let them carry out the experimental research.
Dave Gorman has it right here. We are constantly bombarded by marketers with the need to buy the bestest, newest, coolest, highest-spec... MOSTEST! Do you enjoy a TV program more now it's in HD? No. Sure, you can see the actors' pimples better and some nature shots are better, but the enjoyment is no more or less. The same holds for music. Music is to be enjoyed for how it makes you feel. It's an emotional response to a musician's emotional outpouring. The fidelity of the recording is irrelevant if it still makes you feel the same when you hear the recording as it did when you heard it live. If you feel the need to spend £1000s on "real" HiFi then good for you, but don't tell me that I'm missing out, because I am not. I am probably getting more out of the music than you are, because I am interested in what the artist has to say, not how precisely it was reproduced. Yes, a certain level of fidelity is required, but I have recordings from my cassette tape trading days that I still enjoy, even though in some cases I now have a far higher fidelity copy of the same recording.
The article is so nontechnical it hurts.
Anyway, as everyone has already said, the actual moving parts of an audio system (headphones and speakers) alter (or butcher) the sound significantly more than high-bitrate compression, clock jitter and the other scarecrows of the "Hi-Fi" crowd combined. The same goes for the recording process and studio mixing. As a wild guess, 9 out of 10 people in the world listen to music on hardware that masks any difference in quality. And no matter what losslessly lossless 4800 KHz 1600 bits/sample 100% authorized version of "Death Magnetic" you get, it would still be a pathetic joke compared to the (oooh-pirated, oooh-lossy) Guitar Hero version.
The reasons for lossy and lossless compression are different. Lossy compression provides a wide range of options for sacrificing some quality when bandwidth (or storage space, which is the same thing from a certain point of view) is scarce, from "a very small piece of sound data that can still be deciphered as a human voice" to "artifacts mostly unnoticeable by most listeners while still being x times smaller". Depending on the situation, you aim for some point on this range. Lossless compression just aims to preserve the original data (and maybe save some space by not wasting it on millions of zeros while we're at it). It's transparent functionality, so you don't have to put every track in a ZIP file and unpack it manually all the time.
Because storage space is cheap in most situations at the moment, people tend to miss the fact that lossy compression is not about providing the absolute best quality; it's about the balance between quality and size. If you think about it, 320 kbps CBR MP3s are the same overblown abomination as "100% quality" JPEG images. Moreover, the MPEG-1/2 Audio algorithm itself is more than 20 years old; newer codecs are better.
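That quality/size balance is easy to put numbers on. A back-of-the-envelope sketch in Python (the function name is mine, purely for illustration):

```python
def audio_size_mb(minutes, kbit_per_s):
    """Approximate size in MB of a constant-bitrate audio stream.

    kbit_per_s is the bitrate in kbit/s (1 kbit = 1000 bits, as used
    for MP3 bitrates and PCM stream rates alike).
    """
    return minutes * 60 * kbit_per_s * 1000 / 8 / 1e6

# A typical 4-minute track:
cd_pcm = audio_size_mb(4, 44.1 * 16 * 2)  # 16-bit stereo PCM at 44.1 kHz: ~42 MB
mp3_320 = audio_size_mb(4, 320)           # 320 kbps CBR MP3: ~9.6 MB
mp3_128 = audio_size_mb(4, 128)           # 128 kbps CBR MP3: ~3.8 MB
```

FLAC typically lands somewhere around 50-70% of the PCM figure, so even against lossless, a high-bitrate MP3 still buys a 3-4x saving.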
As for the question in the headline: there are audio samples that produce noticeable MP3 artifacts even when encoded at 320 kbps; you can take them and tell the difference. They are no secret and would probably be on the first page of a relevant Google search. In general everyday use, bitrate requirements differ with genre and the complexity of the music. For example, for some parts of some metal tracks (when blast beats, fast shredding on distorted guitars, a synth chorus and vocals are combined) 256 kbps is just a little bit short (at least, that's what I hear in a blind test on consumer equipment that is far from perfect).
Pointless article, written just to get the comment section lively. It starts and ends with the same idea: "I don't know".
Thing is, this subject has been covered all over the internet by people who actually know what they are talking about. Of course, if you are playing the music out of a lousy system then the quality difference between the formats isn't going to be noticeable. And if all you want is a bit of background music then the small differences in quality won't matter.
If you want a proper hi-fi setup and you spend some money on a nice DAC and set everything up properly (i.e. don't use iTunes as your player!) then you will hear the difference for sure.
My advice for anyone interested is to go and read about this on a hi-fi website where you will get much better-informed discussion and proper testing. Not just "I gave it a whirl on my Sonos" lol.
Proper double-blind tests using high-quality equipment and experienced listeners have been done, and the usual conclusion is that 320kb/s CBR or 256kb/s VBR MP3s can't be reliably distinguished from the originals even with critical material.
I've done the same test myself using high-quality headphones and the studio masters of the band I used to play with -- which do include multiple instruments with a lot of high-frequency content, cymbals, snare etc. -- and came to the same conclusion. So I'm happy to use 256k VBR -- with the LAME encoder, not all encoders are as good!
Anyone who makes claims like "the difference is night and day" without showing that they can reliably tell the difference in a double-blind test -- which they can easily do themselves! -- should be put into the same box as those who promote gold-plated mains cables and green pens for colouring CDs.
And don't get your information about this by reading hi-fi websites, most of them (with a few notable exceptions) are as guilty of promoting the same audio bulls*it as the gullible audiophiles who buy it...
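The "try the test yourself" suggestion above takes surprisingly little scaffolding: the only non-trivial bits are hiding which clip is X and scoring the result. A minimal sketch in Python; the `play` callback is a placeholder (not a real library call) for whatever actually plays the clips and collects the listener's answer:

```python
import math
import random

def abx_trial(play, a, b):
    """One ABX trial: present A, B, and X (a hidden copy of A or B);
    return True if the listener correctly identifies X."""
    x_is_a = random.random() < 0.5
    guess = play(a, b, a if x_is_a else b)  # listener answers 'A' or 'B'
    return guess == ('A' if x_is_a else 'B')

def p_value(correct, trials):
    """One-sided binomial p-value: the probability of scoring at least
    this many correct by pure guessing."""
    return sum(math.comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# With 16 trials, 12 correct is significant at the usual 5% level,
# while 11 correct is not:
assert p_value(12, 16) < 0.05 < p_value(11, 16)
```

The p-value guard is the important part: "I got 9 out of 16" feels like hearing a difference but is entirely consistent with guessing.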
It's all subjective, but as someone who's been playing the guitar for over 20 years and has been through a large collection of amps and cabs, I can say the biggest impacts on the sound 'character' generated for a given set of inputs are:
- the amplifier
- the speaker
This is the same whether it's MP3 files, CDs or lossless digital formats.
However, like any other process, if you put shit in you'll get shit out, lossless or not.
If you can't get a good amp, do your ears a favour and buy a pair of Grado headphones; SR60s will do.
Forget your overpriced, chavvy, gizmo-laden B&W, Bose, B&O, Beats etc. headphones and get something that really makes a difference, and that says you actually know what you're talking about.
I agree with you, but don't forget the DAC! This is digital music, so the conversion before it gets to your amplifier is very, very important. Without a good quality DAC you won't be able to hear the difference between these files; something the author of this article completely misses.
The difference between MP3 and lossless is nothing to do with what is removed. It's everything to do with MP3 adding frequency components that are not there in the original, and (even worse) which have no harmonic relation to the original notes.
Listen to something very simple. A solo flute is perfect. Or a singer with a "pure" voice and an acoustic guitar. If you have a musical ear, the difference between compressed audio and non-compressed is painfully obvious. I can obtain pleasure listening to acoustic music on hardware well below audiophile grade. Its failings are to attenuate some very high and very low frequencies, and to introduce some harmonic distortion, i.e. notes harmonically related to the source. The latter changes the timbre of the sound; some folks actually prefer the "warmth" added by a valve amplifier, which does this more than even a low-budget transistor amp.
But non-harmonic added noise is just that. Noise. Disgusting atonal interference. It doesn't change the timbre; it interferes with your appreciation of the music. It does so with any fidelity of post-DAC amplification, not just the highest. Amplification with an approximately linear transfer curve cannot add non-harmonic components. Sampling will "alias" ultrasonic frequencies back down to lower, audible ones, but that is avoided by filtering the input so that there is effectively zero energy in the pre-ADC stream above half the sample rate to get aliased down into the audible band. Compression is simply evil.
BTW this same phenomenon is why FM sounds so much better than DAB, even on a crappy portable receiver. (Oddly, the horror is far less with speech than with music). Of course, the compression on UK DAB is particularly awful, which doesn't help.
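The aliasing point above is easy to demonstrate numerically: sampled at 44.1kHz, a 30kHz ultrasonic tone produces exactly the same sample values as an audible 14.1kHz tone, which is why the anti-alias filter must come before the ADC. A small sketch (idealised instantaneous sampling, no filter):

```python
import math

FS = 44100  # CD sample rate, so Nyquist is 22050 Hz

def sample(freq_hz, n_samples, fs=FS):
    """Idealised instantaneous samples of a cosine at freq_hz."""
    return [math.cos(2 * math.pi * freq_hz * n / fs) for n in range(n_samples)]

ultrasonic = sample(30000, 64)       # 30 kHz, above Nyquist
alias = sample(FS - 30000, 64)       # 14.1 kHz, well inside the audible band

# Sample by sample, the two tones are indistinguishable: without a
# pre-ADC filter, the 30 kHz content would be reconstructed as 14.1 kHz.
assert all(math.isclose(u, v, abs_tol=1e-9) for u, v in zip(ultrasonic, alias))
```

This is also why "everything between sample points is lost" misses the point: below Nyquist the samples determine the waveform uniquely; above Nyquist the content isn't lost so much as folded somewhere it doesn't belong.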
Your average studio generally has a minimum of two sets of mixing monitors: a high-end set with a flat frequency response, and a shitty set typical of a consumer unit.
On the first pass, when mixing levels, adding compression, FX etc., the high-end monitors are used. The producer's/engineer's ears are trained to hear individual instruments, so they concentrate on ensuring that they sound good (not necessarily an exact reproduction of the instrument per se).
Once the mix is completed it is mastered down to a stereo image, first using the high-end monitors to get the EQ/compression just right. This is the mix you really want to hear but never really get the chance to.
Then it is passed to the shitty monitors, where additional compression/EQ is applied so it sounds reasonable. The difference between these two mixes is night and day. Listen with your eyes closed and you get fantastic stereo separation, cymbals ring beautifully, you can hear (if you are in the know) what kind of guitar amp is being used, and feel the real force of a good singer's voice.
So what we get delivered on a CD is actually a pretty crap representation of the recording anyway. All the dynamics will have been compressed away and "loudness" will have been added. The last Metallica album is a prime example (the Guitar Hero mix was actually better).
Now, your average listener does not know the difference between a Marshall amp and a Fender amp, or a Shure microphone and an Audio-Technica microphone. So generally it does not really matter.
Also, hi-fi gear is not designed (despite what the manufacturers say) to give a perfectly flat frequency response (studio gear is); it is designed to sound "nice" for whatever material is thrown at it.
Listen to the crap mix through the quality studio monitors and it will sound horrid, with a harsh top end and far too much bass.
When CDs first appeared they were fairly hideous, plastic-sounding things that illustrated the weaknesses of the low resolution; my first CD player had an 8-bit DAC in it to exaggerate the problem.
Then in the 90s CD players started using bitstream DACs, which basically meant oversampling and anti-aliasing, and their sound quality was transformed. That said, the vinyl purists had already made their minds up, and advancing technology wasn't going to change that.
When MP3s appeared they were generally quite low bitrate; 128k was the norm, maybe 160 if you were lucky. They did all kinds of odd things to the sound: a highly compressed pop track sounded fairly normal, but an orchestra sounded like a Casio keyboard. FLAC was preferable as it retained the CD quality. MP3s improved as the bitrate went up, now usually 320k, and you can't tell the difference; however, the FLAC purists are the new vinyl types and have already reached their conclusions.
It's a pity that we're still using MP3; AAC (the codec usually found in MP4 files) has much better compression, enabling decent HD audio with sensible file sizes, and there are few devices around these days that don't have sufficient CPU power to play it.
I've been converting my music to MP3 since before the iPod, and was studying communications theory at the time. I have had some pretty good setups to listen on. Even on the lowest-quality equipment I found 128k noticeably poor; for background music it is OK, but not for active listening. I have found VBR to be the best compromise, as some parts of a track have less treble detail (where disk space can be saved), and some parts have loads of treble detail competing with each other. Will I re-rip my music to FLAC? Maybe some of the well-mastered, good quality albums.

However, I also have some albums in DTS surround format, and for me that is the future, not stereo. Each speaker gets its own channel, with an appropriate bitrate for that channel. The sub would need the least; the mid-ranges would only have to deal with mid-range frequencies, so there would be less interference and fewer harmonics to worry about; and the tweeters would be where the directional information comes from, again mastered so that they only get the information they need. I am not suggesting dropping higher frequencies at the mastering stage on an instrument-by-instrument level, but having a fully integrated mastering model where the input is all the original sources and the output is 7.1 surround, with each of the 7 channels intended for a speaker with certain characteristics. No point in the sub in my car getting anything above, say, 300Hz, and no need for my tweeters to worry about anything below, say, 1kHz. If the tweeters then only got the info they needed, VBR could allow big savings while still keeping the resolution high for higher frequencies, because high-frequency detail isn't needed 100% of the time, but when it is needed it is important.
I love it when Audiophile and Techies get into a hissing fit over music recording quality and which format rules and who can hear the difference or not. Why don't you just convert all your MP3s to FLAC to improve the quality, then turn the volume up to 11 on your virtual DJ App and sit back and enjoy the music played over the speaker of your iPhone / Android Phone...
Are you really this clueless or just trolling?
An MP3 cannot be converted to anything better. It's already ruined. Conversion cannot put back what was lost, or remove the garbage that was added, when a CD or better bitstream was encoded as MP3.
Or do you mean, re-rip your CDs, losslessly? In which case why not say so? The problem is that if you download music, you may not have access to any CD-quality sources.
Yep - totally trolling. I'd have thought the 'volume up to 11' might have given that away...
Look, there are so many *points of loss* between the source format (file/record/CD) in a modern hi-fi and your ears, and so much variation in our ability to perceive the sound (including your age and occupation), that I have pretty much given up fretting over the fidelity of what I'm listening to. If you want to listen to music without any loss of quality, learn to play an instrument; a better use of time than sitting in a darkened room with £££££'s of audio kit worrying over whether you should have encoded that 1930s recording as MP3 at 128k or 160, or should have gone lossless.
If you have it in digital format it's already lost data. If it's non-digital then it is degrading over time and use.
So learn to play an instrument.
I've always gone off a simple basis like this:
My vinyl sounded clearest (until I scratched a few too many 12"s on the Vestax PDX 2000)
My tapes sounded shyte but generally gave me access to free music
My CDs sounded better than tape but not as nice as vinyl
My MP3s at whatever rate sound OK but shyte compared to CD and vinyl
My FLAC files don't exist as I can't be arsed to get everything in that format
MP3 players are cheap (or early ones were), so the sound was always going to be weak, especially with the manufacturers' cheap headphones
Is my view way too simplistic by any chance?
Yes - too simplistic
As I explained above, the problem with MP3 is that it adds crap that is not harmonically related to the source music. Added interference, in other words. Noise that was not represented at all in the original data stream.
Which is why compressed MP3s sound shite even on cheap reproduction systems. Only digital compression can do this. (OK, perhaps an off-centre loudspeaker cone scraping on its magnet can also have this effect... but that's at such a high level that you immediately diagnose the problem and scrap the speaker.)
Slight non-linearity in an amp can create only harmonic distortion, not new noises. The weighting of frequencies that are not harmonically related to the input remains zero.
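That last claim is straightforward to check numerically: push a pure tone through a memoryless non-linearity and take a DFT, and energy appears only at exact multiples of the input frequency. A sketch with a made-up cubic transfer curve standing in for a slightly non-linear amp:

```python
import cmath
import math

N = 100  # DFT length; the tone is placed exactly on a DFT bin

# A pure tone on bin 5, passed through a mildly non-linear
# "amplifier" with transfer curve y = x + 0.1*x^3 (hypothetical).
x = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]
y = [s + 0.1 * s ** 3 for s in x]

def dft_mag(sig):
    """Magnitude spectrum via a direct (slow but dependency-free) DFT."""
    n = len(sig)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(sig))) / n for k in range(n)]

loud_bins = [k for k, m in enumerate(dft_mag(y)) if m > 1e-6]

# Energy appears only at the fundamental (bin 5) and its 3rd harmonic
# (bin 15), plus their mirror images (bins 95 and 85): no non-harmonic
# components anywhere in the spectrum.
assert sorted(loud_bins) == [5, 15, 85, 95]
```

Swap the polynomial for something signal-dependent and time-varying (which is what a lossy codec's quantisation effectively is) and this tidy harmonic structure is no longer guaranteed.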
Squeezer has it. Scientific testing in the world of high-end hifi is virtually non-existent. There's no point asking the opinion of an audiophile who swears he can hear the difference if his speaker cables are connected back to front (with the lettering on the cable running from speaker to amp).
Nigel11 is right that your best chance of hearing a difference is on very simple music, where any "unmusical" digital artefacts will add a perceived harshness to the sound. But I try hard not to listen to the sound and just enjoy the music. I suggest you do the same. It's bound to be coming through better than it did on pirate radio in the 1960s, and that was hugely enjoyable regardless!
I'll happily listen to music on a cheap FM transistor radio (just as long as its amp is passably free of crossover distortion and its speaker isn't scraping on its magnet). Or listen to a pre-war recording that happens to be the best-ever interpretation of a work. What comes out is music, plus harmonic distortion that is melodically indistinguishable from the music (just a change of timbre), and minus some bass and treble (again a change of timbre). Oh, and some interference crackles that are easy to ignore if not repeated too often or too loudly. I'd far rather have a crackle than a digital silence.
Music after MP3 compression has added, repeat ADDED, non-harmonic distortion. That means faint notes at random frequencies that are not related in any way to anything in the original. If you have a musician's ear (i.e. perfect pitch or perfect relative pitch, and a deep love of harmony), this rapidly takes away much of the enjoyment of the recording.
As I've commented, it's not a problem with speech, only with music.
My mother is 87. Last time I visited her, her music sounded horrid. A quick inspection while she was out of the room revealed she'd accidentally switched the receiver from FM to DAB. I switched it back and said nothing. A couple of days later she asked me if I'd done anything to it: "it sounds much nicer since you left... I was going to ask you about it but I forgot." QED. DAB sounds crap, even if you're 87. The best you can say for MP3s is that they are less awful, in the same way that a mozzie bite is less horrid than a bed-bug bite.
Bits/sample can be just as important as samples/sec.
For 'pop' there is little dynamic range (everything is LOUD) but for lots of classical (particularly 19th century) there are very large volume changes involved.
Icon: you need a good speaker system to tell the difference
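The bits/sample point above has a simple rule of thumb behind it: an ideal n-bit quantiser gives a theoretical SNR of about 6.02n + 1.76 dB for a full-scale sine wave, which is why bit depth matters most for wide-dynamic-range classical material:

```python
def dynamic_range_db(bits):
    """Theoretical SNR of an ideal n-bit quantiser for a full-scale
    sine wave: 6.02*n + 1.76 dB."""
    return 6.02 * bits + 1.76

# 8-bit:  ~50 dB, audibly gritty on quiet passages
# 16-bit: ~98 dB (CD audio), comfortably wider than the span from
#         pianissimo to fortissimo in a concert hall
```

For loudness-war pop with a few dB of dynamic range, almost any bit depth is enough; for a 19th-century symphony, the quiet passages live many tens of dB below full scale, and that is where the bits get spent.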
If you're trying to spot imperfections and differences in sounds, this is easier (and much cheaper!) with high-quality headphones (Beyerdynamic, Sennheiser...) than speakers, unless you're fortunate enough to do what high-end studios do and have very expensive monitors in an acoustically-treated room.
Fenton, your point about studios might be correct for the ones churning out chart trash, but it's not for the ones I've used. Most critical mixing is done using high-quality monitors (e.g. ATC); the NS10s (or whatever) are used to check that it still sounds OK on lower-fi gear, for example where low bass notes will be inaudible. If you're then doing a demo track to be low-quality-streamed on the web, more compression might be applied to get the average level up, but this isn't the version that would make it onto an album.
Nigel11, if you're so convinced that MP3s always sound crap and add all these non-harmonic tones, why not put your money where your mouth is and try the test I suggested? (uncompressed vs. 256k VBR MP3)
Squeezer, that is mainly what I was talking about. In general, people who buy non-pop music still tend to go for the CD route.
It's the compression that gets on my goat.
Any studio I've been into (including an invite around Abbey Road) does not use directional snake-oil cables. Yes, they use low-capacitance, low-resistance cables for the pure analogue audio path, with very good connectors (for reliability reasons).
It is a pity that in the digital domain the cut-off frequencies are 20Hz and 20kHz; I'm a great believer that there are certain frequencies we feel, which contribute to the overall mood of the music, that are not audible.
I've never heard a classical recording that can quite capture a real concert. Or a recording that can capture the mood of a live rock performance with 18" bass bins where you can feel the punch of the bass drum against your chest. No matter how loud and high quality the recording and the equipment.
Sent to me and posted here anonymised:
You can tell the difference, like everybody can.
Take your favourite CD, one that does not contain metallic rock or other music with a lot of white noise.
Something more like acoustic instruments, singing etc., without a lot of cymbals and other white-noise-producing instruments, will do perfectly.
Rip an MP3 of that CD.
Play the CD and the MP3 simultaneously, so you can alternate between the two and hear the difference, if any.
You *will* hear the difference on a somewhat reasonable hi-fi installation.
Sonos play 1 or play 3?!
To be honest, if you are happy listening on a Play 1 then you aren't going to care. It's a crappy little speaker only good for a kitchen or shower. Almost the same is true of the Play 3. The Play 5 only just begins to get serious, but still, to my (and my wife's) ear, isn't great.
So if that's what you are happy listening to then you aren't the kind of person who can or should be judging this. Not a criticism; some people are more bothered by it than others, that's all.
For the record, my wife and I own a pair of Sonos units, but both are Connects that feed a better amp and speakers. We trialled a Play 5 and decided not to buy one even for the bedroom, although for that room it would suffice.
... then it's all about the ears.
20 years ago, whilst at college, I splurged a whole year's student loan on a hi-fi (Linn speakers, Arcam CD player and Creek amp) and thought I'd be able to listen to my music with a whole new level of appreciation. Admittedly it sounded way better than any equipment I'd had before, but the one factor money can't do anything about is the standard of my hearing.
Fast forward to today, and I suffer with slight tinnitus and a related drop in my ability to hear high frequencies (12kHz+) in one ear. Despite this, I still enjoy my music. I rarely get the opportunity to listen to the HiFi these days however, so listen to most of my music via a Sansa Fuze and Sennheiser CX300 earbuds. My music is usually ripped to MP3 using LAME, set to encode VBR with a quality value of 2.
So, a couple of years ago, I set myself a blind test. I selected half a dozen tracks that I know well and loaded them onto the Fuze in three formats: an uncompressed WAV ripped straight from CD, the WAV encoded using my usual MP3 settings, and the same WAV encoded as a 128kbit/s WMA file. All files were put into specific playlists so that all three versions of each track were together, and I listened to them on shuffle without looking at which was which.

After an hour of testing, I concluded that I could hear no difference whatsoever between the tracks, not even the crappy-bitrate WMAs. I've repeated the test while checking which version was playing, and they still sounded the same. Obviously the difference might be noticeable if I used my hi-fi, but that's not how I listen to my music these days. The one thing that is clear to me is that there is no right or wrong answer on the topic of audio compression, though judging by the number of comments that follow any article about audio formats, there are clearly a lot of people who feel strongly one way or the other.
Depending on your point of view, I'm either very lucky or very unlucky, but for me, VBR MP3s suit my needs.
Back in the 90s I ripped all my music collection to 128kbps MP3 to fit on the small SD cards of the time, and with no psychoacoustic model due to the lack of hardware floating point on the ARM. I always meant to get around to re-ripping it at much better quality, but I recently found out my hearing drops off a cliff after 8kHz, so there's not really much point.
If you want to compare FLAC and MP3 audio quality effectively you are going to need a properly designed and controlled experiment. If you can get some known good source recordings and a studio monitor setup you could then encode the files in various formats and conduct blind tests with several listeners. Of course audio quality is a highly subjective thing as evidenced by claims to "hear" a difference between two cables carrying a digital signal, but blind testing should filter out bias.
"Well if you cannot tell the difference maybe FLAC isn't for you ?"
More seriously it so depends on the circumstances in which music is listened to.
On a car stereo or through earphones on a busy train commute, arguing over the best quality lossy format versus a standard lossless format is probably pointless. Background noises will probably have a noticeable impact on your listening experience, regardless of the quality of the speakers or earphones involved.
As soon as better listening conditions are taken into account, the difference in quality becomes more obvious. To what extent will also depend on the musical styles involved, how trained your ears are to the instruments played, and how often you've played the songs in the past.
MP3, AAC and Ogg Vorbis were always driven by convenience and concerns over size. Now that broadband Internet access is more common and portable audio devices have bigger/faster storage, audio quality can become relevant again... for the ones who care.
I've never been partial to overpriced Bose equipment. I find less expensive alternatives. And while they won't completely cancel out background noise, they do make a nice difference in a noisy environment like inside a conveyance. I personally keep a pair for air travel.
What is this nonsense about "my audiophile friend says he can definitely hear a difference..."?
Are we cave men? Has everyone forgotten that the concept of 'science' exists?
It would take 20 minutes to do a proper double-blind test of this claim. How about people stop quoting a bunch of nonsense and do a test.
"Campbell says: "Some music recorded and produced in the '80s seems to suffer from a weedy presentation with little bass."
Maybe because the source material was encoded in Dolby, which always sounds thin and reedy unless pumped through a decoder? Perhaps someone needs to test this...
Lots of technical speak here, well over my head in most cases.
One thing I will add (that'll never get read, being five pages deep in the comments :P) is that some people have listening habits that will, in a way, raise their perception of differences - and of course there's the genre argument (punk recorded in a shitty run-down studio: MP3s will barely make a difference; classical/jazz/electronica that's been recorded and produced with the highest quality in mind from the start will almost definitely benefit from FLAC).
What I'm getting at is that someone who listens to tracks or albums over and over again - usually fans of a specific artist - will begin to perceive even the slightest differences as they will have "trained themselves" to expect a certain noise at a certain time. My personal favourite for this is Aphex Twin - very highly regarded electronica. Some outlets (e.g. bleep.com) sell their music as mp3, WAV, FLAC, and in some cases 24bit FLAC (I won't pretend to say I can tell the difference between 16bit and 24bit FLAC, but MP3s and FLAC, of music I *know*? Yep. definitely).
Anyhow - it's an argument (or a discussion) I've had with many, many people, as soon as they realise I'm putting 200-500MB per album onto my phone, then using my phone in my car for music. It's not like I'm going to convert all my FLACs to MP3 just so I can hold more on my phone because the listening situation means I'm not getting the maximum benefit; but even in the car, if an MP3 threw up a warble or coloured some noise in a way I'm not accustomed to, I'd notice. Not in a "this offends mine ears!" way, more of a "huh, that doesn't sound quite right" kind of way.
Tl;dr (aural) beauty is in the (ears) of the beholder. Or something.