beaTunes News

Tuesday, April 15, 2014

Does 24-bit audio matter?

You have probably heard of Neil Young's Pono music player. Supposedly it sounds awesome. However, scientists have doubts. The usually very knowledgeable people at Xiph say: "Unfortunately, there is no point to distributing music in 24-bit/192kHz format. Its playback fidelity is slightly inferior to 16/44.1 or 16/48, and it takes up 6 times the space."

The argument against higher sampling rates is clear. And using lossless formats also provides clear advantages, even if it seems that people cannot distinguish between a well-encoded AAC file and a CD. But what about the individual sample resolution? Does 24-bit really provide no advantage over 16-bit?

Imagine a very quiet recording that really only uses the lower 4 bits out of 16. Won't it sound worse than the equivalent recording using the lower 12 bits out of 24?
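
To put numbers on it: 4 bits give you only 2^4 = 16 possible amplitude levels per sample, while 12 bits give you 2^12 = 4096. A quick shell check:

echo $((1 << 4))    # 16 levels with 4 bits
echo $((1 << 12))   # 4096 levels with 12 bits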

I admit, I assumed it would sound far worse. Then I tried the following.

To test whether it sounds any different, I took an ordinary mp3 file and converted it to 16-bit WAV using sox.

sox rumour.mp3 rumour.wav

Now that we have a 16-bit version, let's create a very quiet version that uses only the lower 4 bits. That's equivalent to shifting all samples right by 12 bits, which in turn is equivalent to multiplying each sample by 1/2^12 ≈ 0.00024. As it turns out, this is easily accomplished with sox:

sox -v 0.00024 rumour.wav rumour_quiet.wav
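
To sanity-check just how quiet the result is, we can let sox print the peak amplitude with its stat effect (using the file name from above):

sox rumour_quiet.wav -n stat

The reported maximum amplitude should come out at roughly 0.00024 times that of the original file.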

Now we have a very quiet 16-bit file that really only uses 4 bits. Let's do the same for a 24-bit file. To do so, we first convert the 16-bit WAV file to 24-bit. And yes, I know, this does not magically increase the quality, but bear with me for a second.

sox rumour.wav -b 24 rumour24.wav
sox -v 0.00024 rumour24.wav rumour24_quiet.wav

At this point we have two files: one with 16-bit resolution, one with 24-bit. Both are very quiet, but we assume that the 24-bit file sounds better, because it supposedly still has 12 bits to represent the signal, while the other one has only 4. The fact that we converted from 16 to 24 bits earlier does not matter, because we scaled the signal down so much with the -v flag that any disadvantage we might have had is gone anyway.
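
To double-check that the two files really differ only in their sample resolution, soxi can report the bit depth of each:

soxi -b rumour_quiet.wav
soxi -b rumour24_quiet.wav

The first should print 16, the second 24.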

Well, judge for yourself: rumour_quiet.wav and rumour24_quiet.wav.

Can you tell the difference? I guess the appropriate question is: Are you able to hear anything at all?

That's right. The result is hardly audible, even with considerable amplification. Now, if you can tell a quality difference between something you can hardly hear and something else you can hardly hear, you astonish me. So unless this little experiment is somehow flawed (please point out any errors!), 24-bit audio does not matter.


20 Comments:

Anonymous Anonymous said...

It's not that the difference is hardly audible... it's that there is no difference. It's like drawing a 5 cm line on A5 versus A4 paper: you will still see the entire segment.

April 15, 2014 at 6:54:00 AM EST  
Anonymous Anonymous said...

You are taking an mp3 and adding "data" to it? It doesn't work that way. You need music that was recorded in 24 bit. Then it is up to your equipment to play it well, and up to your ears to tell. If you are using $2 computer speakers, do not expect to hear anything. Very high-quality DACs and speakers will surely leave you with a difference. In auto terms, the source file is equivalent to high-performance fuel. Don't expect a Honda to care whether it has premium fuel in it. A Ferrari probably will; an F1 car surely does. What you are doing is taking a Honda, painting it Ferrari red, and then claiming that it isn't going any faster, therefore it's a scam.

April 15, 2014 at 7:14:00 AM EST  
Anonymous Anonymous said...

Do you have a 24-bit or a 16-bit soundcard? If it can only handle 16-bit audio, then both of your .wav files will end up truncated to the same stream. Undo the shifting by applying sox -v 4096 to each one. I did, and the quantization noise on the 16-bit .wav is incredible, even on my laptop speakers.

April 15, 2014 at 7:52:00 AM EST  
Blogger beaTunes said...

@ferrari-guy:
Once you reduce volume as harshly as I did, any differences in quality that the original file had, be it 16 or 24 bit, will be gone anyway.
But that's not the point. The point is that the little bit of audio that's left after reducing it that much is hardly audible on its own. Imagine trying to hear it in the presence of a signal at normal volume. Absolutely impossible.
Had I used a 24-bit file instead of the mp3, the result would have been the same: a hardly audible signal.
It simply does not make a difference.

April 15, 2014 at 8:04:00 AM EST  
Blogger beaTunes said...

@24/16-soundcard-guy:
Yes, you certainly hear a big difference when you increase the volume again. I bet.
But that's not the point.
The point is that you cannot hear the signal unless you crank the volume all the way up (and hope that you don't forget to turn it down again before you play a regular sound file, because it will blow your head off).
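
Just to spell out what that looks like, using your factor of 2^12 = 4096 and made-up output names:

sox -v 4096 rumour_quiet.wav rumour_restored.wav
sox -v 4096 rumour24_quiet.wav rumour24_restored.wav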

April 15, 2014 at 8:07:00 AM EST  
Blogger beaTunes said...

@A4-vs-A5-guy:
It does not matter whether there is a difference or not.
With the audio being so quiet in either case, you cannot hear enough for it to be relevant. Especially not when you imagine other content at regular volume in the same file, and you simply don't want to turn the volume up all the way...

April 15, 2014 at 8:10:00 AM EST  
Blogger Breakthrough said...

It's possible there's no difference because sox internally uses a higher bit count with dithering; when changing the volume with the -v flag, this is done automatically, which helps to keep the original dynamic range. From http://sox.sourceforge.net/sox.html:

> For example, adjusting volume with vol 0.25 requires two additional bits in which to losslessly store its results (since 0.25 decimal equals 0.01 binary). So if the input file bit-depth is 16, then SoX’s internal representation will utilise 18 bits after processing this volume change. In order to store the output at the same depth as the input, dithering is used to remove the additional bits.

It would be more appropriate to repeat this test with dithering disabled, as then I am sure there would be a more noticeable difference in fidelity. In other words, using the -b and -v flags is **not** the same as simple multiplication/bit-shifting!

April 15, 2014 at 9:09:00 AM EST  
Blogger beaTunes said...

The point of the post is not really whether there is a difference in quality.

It's whether you can hear anything at all.

But, yes, if it were about quality, I guess I should have started with a clean 24-bit file, to please all the people who think that's relevant (even though, right after decoding, we reduce the volume by a factor of 2^12).

And yes, I should have paid more attention to proper dithering.
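
For what it's worth, sox has a -D flag that disables the automatic dithering, for anyone who wants to redo the experiment that way (output name made up):

sox -D -v 0.00024 rumour.wav rumour_quiet_nodither.wav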

But since the post, in the end, is not about quality, but rather about whether we have any chance at all of perceiving a difference in quality, I don't think it matters all that much.

April 15, 2014 at 9:11:00 AM EST  
Blogger Unknown said...

I don't think you understand what you are talking about. When you make a sound quieter, you are removing the headroom from that audio, and thus you are removing any way of comparing the two bit depths with human ears.

For example, if you take any two sounds, lower the bit depth to almost nothing, and make them extremely quiet, then the human ear won't be able to tell the difference between them. You can visualise this as looking at a graph of a spectrum: when you increase the volume, the spectrum gets larger and you can see more detail.

This is why, on guitar pedals, you can hear a clear difference when you run 18V through them instead of 9V: the 18V gives the volume and other effects more headroom to work with. The more headroom, the more spaced out the detail is, and the easier it is for the ear to pick it up.

This is why your test is invalid: you aren't actually comparing 16 bit to 24 bit. Also, you seem to understand that if you reduce any sound to -infinity volume then you won't be able to compare them, so why post an article like this?

Also, what was the bitrate of the original sounds? Have you checked that it was actually produced at that bitrate? Have you listened to 16-bit vs 24-bit audio that you have recorded yourself? Have you listened to the difference in sample rates as well? Have you considered that you will lose different amounts of definition in the conversions you are performing from 16 bit to 4 bit and from 24 bit to 12 bit?

April 15, 2014 at 9:25:00 AM EST  
Blogger beaTunes said...

"thus you are removing any way of comparing the two bit depths with human ears."

That's exactly the point of the post. When it's down to the extra 8 bits that 24-bit recordings have to offer, human ears don't have a chance.
I'm not saying 24-bit makes no sense when recording, producing, mixing, etc. - it certainly has its place. But for consumers who listen to the final product, it does not really matter, since you can't really hear the difference anyway.

April 15, 2014 at 9:37:00 AM EST  
Blogger Unknown said...

Once again, I don't think you understand: you can't hear those 8 bits because you zeroed the volume; you can't hear anything if you zero the volume. If you listen to just 8-bit music, then you can still hear the 8 bits, even if it sounds terrible. What you have done is cheat by just lowering the sound until it's barely audible at normal settings.

April 15, 2014 at 9:42:00 AM EST  
Blogger beaTunes said...

Rob, supposedly 24-bit excels because of its greater dynamic range. 24 bit instead of 16 bit. Greater resolution, greater quality. I don't doubt that at all.

But what's the difference between a 16-bit recording and a 24-bit recording?

Right, it's that you have 8 bits more room to represent the audio, thus greater dynamic range.

Now, by reducing the volume by 12 bits, I created two files that emphasize the potential difference. What is left is the part of the audio that should really bring out the quality of the 24-bit version.

I assumed that 24-bit audio would sound a lot better, because it simply has more bits to represent the low-volume audio. And it probably does. I don't know. I was unable to check, because the audio was too quiet. Another reader pointed out that when you re-amplify with -v 4096, you do hear quite a difference.

But what I tried to communicate is that the audio represented by those last 4 bits in 16-bit audio and those last 12 bits in 24-bit audio makes only the tiniest difference *when played back at regular volume*. In fact, you can hardly hear it unless you crank your volume knob all the way up. But who listens to music like that? No one. Most people like to keep the volume knob somewhere where it's pleasant to hear all their music.

So, potentially, 24 bit could sound a lot better for quiet pieces, but in practice, for the listener, it does not matter all that much. That's at least the conclusion I draw. I don't think that's cheating.

And that aside, I don't mind 24 bit audio at all.
I just don't think we really need it or that I could tell the difference.

April 15, 2014 at 10:06:00 AM EST  
Blogger Breakthrough said...

Also @beaTunes, regarding your response to @Rob Copel: I totally agree there.

Any human listening to a normalized 16-bit audio recording most likely won't be able to tell; however, if you want to do any further work *with* that recording, or if precision comes into play (e.g. what if you are measuring not a microphone but some other analog instrument? 24 bits provide a significantly greater range of values), then it might be worthwhile to move to 24-bit as a standard for everyone, as one can always play back 16-bit audio without any changes to the original file.

The bigger issue (even in terms of file size, let alone hardware/software difficulty) in my opinion is trying to justify huge sampling rates like 96 kHz/192 kHz. Not only can the human ear not hear much above 24 kHz, negating the need for anything above a 48 kHz sampling rate (given the appropriate low-pass filters), but even a 16-bit 96 kHz file will be larger than a 24-bit 48 kHz file - and the latter *will* have some measurable difference (which can be amplified, as you show in your article). What we *will* never notice - unless shifting the frequencies or working with the audio itself - is an improvement from a higher sampling rate (as a sampling rate of 48 kHz provides perfect reconstruction of frequencies up to 24 kHz).

April 15, 2014 at 2:11:00 PM EST  
Blogger Unknown said...

@beaTunes The bit you aren't getting is that bit depth isn't volume. Two things make up so-called audio quality: sample rate and bit depth, and neither of these has any relation to volume. Bit depth is the number of volume levels a single sample can choose from (by increasing the bit depth you are adding more precision rather than more range; imagine a ruler with only centimetres on it, and then, when you increase the bit depth, you get millimetres as well), and sample rate is how many times per second you can pick such a value. With this in mind, it doesn't matter what volume you are playing at; what matters is whether you can hear the difference in the sound.

This is where your test fails: you are saying that bit depth is related to volume, and that by removing bits you are removing volume. If you wanted an accurate test, take the two tracks and flip the phase on one of them, so all of the identical material cancels out and you can hear the difference between the two tracks. But once again, I'd need to know that your conversion is lossless and that the original track was in 24/192 or 24/96. Personally, I always record at 24/96, because if I go lower than that, I can clearly hear the difference in the sound.

But then again, it doesn't matter that much for the masses, because they listen through bad headphones and download it off iTunes, which automatically downgrades the quality if you aren't with a big record label. The main thing is that there is an audible difference between 24 bit and 16 bit.

April 15, 2014 at 7:03:00 PM EST  
Blogger beaTunes said...

Rob, I am aware that bit depth isn't volume.

But in reality, people listen to music at a set amplification volume (otherwise things like ReplayGain would be pointless). And when the chosen amplification volume is constant, bit depth suddenly *is* strongly related to volume. Only in a setting where you can tweak the digital audio as much as you like, and don't have a set volume, are the two unrelated. That would be the case when mixing in a studio. But imagine listening to music on your home stereo. A regularly loud song comes up, and then one that really uses only the lower 4/12 bits. You would hardly be able to hear it.

I appreciate your flip-phase/cancel-out suggestion to create a true diff track. However, the whole point of the post was not to come up with a quality comparison of 24-bit vs. 16-bit (even though it starts out like that). You are right, it falls way short of that. Instead, I tried to point out that the comparison isn't necessary, because *at regular amplification* most people are unable to hear the difference anyway. Writing this, I realize I should have made the amplification point much clearer. Then perhaps we wouldn't have had this exchange at all.
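
For anyone who wants to try that cancellation test on the files from the post, sox can mix the two with one input inverted (a sketch; the output name is made up):

sox -m -v 1 rumour_quiet.wav -v -1 rumour24_quiet.wav difference.wav

Whatever remains in difference.wav is the part the two files do not have in common.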

I appreciate that @Breakthrough again pointed out that my whole argument falls short in a studio setting, where the extra bit depth really makes a difference.

I also appreciate your argument that it really does not matter when listening through bad headphones. That's kind of the direction my whole argument went.

April 16, 2014 at 3:13:00 AM EST  
Blogger Unknown said...

Exactly. If he isn't recording at 24 bit then it's apples to oranges. Good post.

April 18, 2014 at 10:07:00 PM EST  
Blogger Unknown said...

DJ BeaTriss (on SoundCloud) and I have worked for three years on this question of sample format and density (8, 16, 24, and 32 bit, at 128, 256, 320, 40k, 44.1k, 96k, and 192k). We have spent years listening on JBL LSR 4328 reference monitors with subs in our self-built audio studio. There is no question that fidelity increases with higher bit depths and sample rates. But, as with anything else, you get diminishing returns in the fidelity the human ear can distinguish as sample rates and bit depths increase.

You also start to run into practical issues. Mashing 4 songs at 96k/24 bit requires a whole lot more processor speed than mashing 4 songs at 256/16 bit. Though a one-year-old MacBook Pro with 8 GB of memory seems to have no trouble mashing four 24-bit, 96k songs, I do notice our six-year-old MacBook Pro operating at what appears to be around 60-80% processor capacity to do the same thing.

Another is that our preferred equipment, the Traktor Kontrol S4, won't play 32 bit at all. I believe it will play music sampled at 192,000.

Size matters. The typical MP3 or AAC song requires only 10 megabytes (maybe 20 tops) of storage space, so about 2-3 megabytes per minute of music. Music at 24 bit/96,000 uncompressed runs right around 32 megabytes per minute. We use uncompressed AIFF, so at least the computer doesn't have to decompress the file before playing it. However, when we dabbled in 32-bit/192,000 uncompressed AIFF, I remember some 6-7 minute songs exceeding a gigabyte. That is obviously completely impractical for anybody except possibly Noisia or Deadmau5 (the big boys...)
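
As a rough sanity check on that figure (stereo assumed):

echo $(( 24 * 96000 * 2 * 60 / 8 / 1000000 ))   # 24 bits x 96000 samples/s x 2 channels x 60 s, in MB -> 34

So "right around 32 megabytes per minute" is in the right ballpark.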

And of course, the larger the file, the longer it takes for Audacity, Platinum Notes, etc. to work their magic. I remember Audacity routinely taking 30+ minutes to run a routine clip fix on some extremely dense files.

And this totally ignores that in almost every venue you have essentially no control over speaker placement, and the whole place is just one big mess of reverberating, echoing, muddy sounding music, no matter what you start with.

In the end, as many have mentioned above, it comes down to a balance. The artist/musician has to produce an end result that he/she is happy with, given the practicalities of cost and time. Currently, our favored approach is to start with the best-quality recording we can find (which is often Beatport's higher-fidelity stuff); then we work on each song in Audacity/Platinum Notes/etc. until we get the sound we want. These are then saved as uncompressed 24-bit/96,000 AIFF. And there is so much more than this to producing an effective, unforgettable 'show': lights, stage persona, and interaction with your audience play just as big a role in entertainment, or bigger.

That's all I can tell you at this point. Only time will tell us if it was worth the effort.

April 23, 2014 at 9:59:00 PM EST  
Blogger beaTunes said...

There has recently been a listening test comparing 24-bit audio with 16-bit audio.
The result: People can't tell the difference.
Details are at http://archimago.blogspot.de/2014/06/24-bit-vs-16-bit-audio-test-part-ii.html

December 10, 2014 at 7:58:00 AM EST  
Blogger DanBB said...

It doesn't make a difference when listening on consumer systems at home. But you can definitely hear the difference in clubs with huge subwoofers and high-quality tweeters playing at really loud volumes.

February 27, 2020 at 7:06:00 AM EST  
