Can someone please explain to me why and how a 24-bit/44.1kHz file is considered high resolution?

How so? Much of it seems very reasonable. For example the idea of using a $16,500 power bar does not have much in the way of scientific and engineering support, fake “white papers” aside.

Now if one has the extra $$$ to spend and one feels that the price tag is fair and the results are worth the money, fine with me; just don’t tell me that the results are clearly audible or even measurable.

And he’s not saying that good cables aren’t needed but rather that good cables are available for far less money than some would believe possible. That is far from insane and very reasonable.

I’m sorry, $16000 for a power strip is insane in my opinion.

I think you’re in agreement … it wasn’t evident that you were referring to the $16k extension lead. It’d look more at home in an undertaker’s, and that’s where I’d be if I bought one.

Sorry James, I misunderstood you. I thought that you were referring to his overall post and did not realize that you were referring to the power strip part, which by the way is totally insane! Again, sorry about that.

By the way, that’s why I like to use the “quote” function; it helps to avoid these kinds of misunderstandings.

I agree, or you could ask.

Good idea! I might try that! :smile:


I don’t listen to 16-bit music or 24-bit music. That is the music source, sure. But my signal path includes a 24-bit to 64-bit conversion prior to a convolution filter, then a 64-bit to 32-bit conversion as it’s fed into my DAC. Even without convolution, the DAC upsamples to 32 bits for D-A conversion.
So if engineers choose to use 24 bits because it better ‘preserves’ resolution while they’re futzing with it, wouldn’t it make sense to start at 24 bits before signal processing through this signal path to my DAC, then my ears? Why would I want to ‘compromise’ and start with a lower bit depth? If the engineers don’t want to make those compromises, why would I?
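For the curious, here’s roughly what that chain looks like sketched in Python/NumPy. The test tone and the 3-tap filter are made-up placeholders, not my actual convolution setup:

```python
import numpy as np

def process_chain(pcm24: np.ndarray, filter_taps: np.ndarray) -> np.ndarray:
    """24-bit integer PCM -> 64-bit float DSP -> 32-bit float out to the DAC."""
    # Widen 24-bit integers to 64-bit float, normalized to [-1.0, 1.0)
    x = pcm24.astype(np.float64) / 2**23

    # Apply the convolution filter at full 64-bit precision
    y = np.convolve(x, filter_taps, mode="same")

    # Narrow to 32 bits on the way to the DAC (float here; real chains
    # may hand off 32-bit integers instead)
    return y.astype(np.float32)

# Hypothetical usage: one second of a 1kHz tone and a trivial smoothing filter
pcm24 = (np.sin(2 * np.pi * 1000 * np.arange(44100) / 44100) * (2**23 - 1)).astype(np.int32)
to_dac = process_chain(pcm24, np.array([0.25, 0.5, 0.25]))
```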

That’s 64-bit float.

Roon does its DSP using floating-point arithmetic. And modern Intel processors (the kind Roon Core runs on) handle 64-bit floating point natively, so there’s no real speed penalty versus 32-bit.

As to the conversion from float back to integer, there’s no particularly good reason not to truncate back to 24 bit. (I can think of a reason, to do with the headroom adjustment; it’s just not a particularly good reason.)
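That float-to-integer step is tiny in code. A sketch, assuming 64-bit float samples normalized to [-1.0, 1.0):

```python
import numpy as np

def float_to_24bit(x: np.ndarray) -> np.ndarray:
    """Truncate 64-bit float samples back to 24-bit PCM (stored in int32)."""
    # Guard against inter-stage overs before scaling up
    x = np.clip(x, -1.0, 1.0 - 2**-23)
    # astype() truncates toward zero; real pipelines might round instead
    return (x * 2**23).astype(np.int32)
```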

Your DAC uses one bit depth (usually 32-bit) and one sample rate (usually 768kHz) to do the D-A conversion. By design this must be at least as high a bit depth and as fast a sample rate as the maximum it can accept.

No.

For recording and mastering, you need more bit-depth than the “target” bit depth of the final product. If the target bit depth is 16 bits (2 bytes), then you need 3 bytes (24 bits).

After you’re done adjusting levels, mixing and applying DSP, you truncate to the desired bit-depth.

Because, when the engineers are done, the “extra” bits contain nothing but garbage. (Which, fortunately, is at so low a volume that you can’t hear it anyway.)

The only reason the engineers didn’t do what they intended (and truncate to the intended 16 bits) is that audiophiles are perennial suckers for the “more is better” fallacy and will pay more for a 24-bit recording than they will for a 16-bit recording.
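For the curious, that truncation step, together with the textbook TPDF dither that is normally applied when reducing bit depth (dither comes up again later in this thread), looks roughly like this. A sketch, not any studio’s actual chain:

```python
import numpy as np

def to_16bit_with_dither(x: np.ndarray, seed: int = 0) -> np.ndarray:
    """Reduce float samples in [-1.0, 1.0) to 16-bit PCM with TPDF dither."""
    rng = np.random.default_rng(seed)
    lsb = 2**-15  # one 16-bit quantization step, in full-scale units

    # Sum of two uniform noises = triangular PDF spanning +/- 1 LSB
    tpdf = rng.uniform(-lsb / 2, lsb / 2, x.shape) + \
           rng.uniform(-lsb / 2, lsb / 2, x.shape)

    x = np.clip(x + tpdf, -1.0, 1.0 - lsb)
    return np.round(x * 2**15).astype(np.int16)
```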


Thank you @Jacques_Distler! Very informative and very clearly written.

For that price, you could build a battery pack, with trickle charger and (assuming you’re a klutz with a soldering iron and couldn’t read a circuit schematic to save your life) pay someone to do the necessary mods on your stereo equipment to run them off of said battery pack.

And you’d still have money left over for a nice little vacation in Greece.

Good thing your spouse doesn’t read this forum … :slight_smile:

I think the smart money bet is just objectively true.

I’ve got a very hi-res system and can tell the difference between Red Book and hi-res. Well, I thought I could. Then I realized that what I was hearing was the difference between a well-miked recording and the rest. I listen mostly to jazz, classical and film tracks. Not saying the hi-res offerings are not good, but just that a good Red Book recording will sound much better than a 24/192 of a poorly recorded session.


This is one area at least where I think there is no disagreement.

Recording and mastering every time.

.sjb


This is easy to verify.

Any audio-editing application (I use Audacity) will let you downsample a 24/192 recording to 16/44.1. Either solicit the help of a friend, or load them both into an app like foobar, and do an ABX test to see if you can tell them apart.
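If you’d rather script it than click through Audacity, the same prep is a few lines of Python (hypothetical file names; needs the soundfile and scipy packages):

```python
import soundfile as sf
from scipy.signal import resample_poly

x, rate = sf.read("master_24_192.flac", dtype="float64")  # hypothetical input
assert rate == 192000

# 192000 -> 44100 reduces to a ratio of 147/640 (gcd = 300);
# resample_poly applies its own anti-aliasing filter along the way
y = resample_poly(x, up=147, down=640, axis=0)

# soundfile performs the bit-depth reduction when writing PCM_16
sf.write("master_16_44.flac", y, 44100, subtype="PCM_16")
```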

Anyone who’s ever tried this can predict what your result will be.


It might be that 24-bit recording gear has better amplitude resolution than 16-bit equipment, and there is nothing in between since digital equipment likes to think in bytes. BTW, 16-bit equipment often doesn’t deliver full 16-bit resolution in practice, and 24-bit recording gear certainly doesn’t have 24-bit resolution; current technical limitations yield slightly over 20 bits of real resolution.


Nobody disputes the benefits of recording in 24 bits. I think the discussion is about 24 vs 16 bits as a distribution medium.

I’ve read right through this thread and, as usual, the main arguments are about whether we can capture and reconstruct the sampled audio waveform accurately with 16 bits and, therefore, whether we really need 24-bit resolution.

I see exactly the same arguments around the 44.1kHz sampling frequency, namely whether it is worth capturing frequencies above 20kHz (the usual ‘only bats, dogs and small babies can hear higher’ is often trotted out).

All these arguments, to me, miss the point, which is that, once we move away from analogue methods of recording, storage and reproduction, all the rules about signal-to-noise and frequency response become fairly irrelevant unless, of course, the system truncates what we need.

With digital what matters most is not how accurately the waveform is stored, given that we’ve adopted the 16-bit/44kHz CD Red Book system as some sort of hi-fi standard, but what happens when we record, store and reproduce the waveform using a system that operates at this resolution.

By this I mean, what do the digital systems do when recording, storing and reproducing an analogue waveform that impinges on what we hear?

For example, the question regarding sampling frequency should not be about whether recording frequencies beyond 20kHz matters, but what analogue-to-digital and digital-to-analogue converters do at and beyond 22kHz (the Nyquist limit of 44.1kHz sampling). The answer is that the distortion and noise produced beyond 22kHz is horrendous and must be heavily filtered if we aren’t going to scream in agony and drive our amplifiers crazy! This also calls into question what distortions the brick-wall filters at 22kHz introduce. These arguments are not pertinent to this thread, but I just want you to think about them.
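To make the aliasing half of that concrete, here’s a toy numerical illustration (not a model of any real converter): sampled at 44.1kHz, a 25kHz tone is indistinguishable from a 19.1kHz tone, which is why the filtering has to happen before the sampling, not after.

```python
import numpy as np

fs = 44100                     # Red Book sample rate; Nyquist is 22050 Hz
t = np.arange(1024) / fs       # sample instants

tone_25k = np.sin(2 * np.pi * 25000 * t)       # above Nyquist
alias = np.sin(2 * np.pi * (fs - 25000) * t)   # 19.1kHz, in-band

# Sample for sample, the 25kHz tone is the 19.1kHz tone with its sign
# flipped, so their sum cancels down to floating-point roundoff
print(np.max(np.abs(tone_25k + alias)))
```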

When we talk about bit depth, then, we should be looking at the problems involved in digital recording and reproduction.

Let’s start with the basics. When recording engineers were limited to analogue tape they would frequently push levels to +3dB simply to minimize noise at the expense of mild distortion. With digital, I believe, engineers no longer monitor recording levels very closely, in the belief that the digital system is noise-free.

I’ve been in recording studios where recordings are made peaking at -20dB just to make sure that sudden unexpected transients don’t hit the 0dB level. Anything above 0dB is pure, nasty distortion and so must be avoided at all costs.

If we say that, for orchestral music, we are looking at a 60dB dynamic range, then we need a digital system that remains clean over 80dB overall. Now this should be within the range of 16-bit systems, but the problem is that the lowest digital levels are prone to quantization noise, which is why engineers quickly added dither so that we would never hear this rubbish.
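For anyone who wants the back-of-envelope numbers, the standard ideal-quantizer formula is SNR ≈ 6.02 × N + 1.76 dB for N bits:

```python
# Ideal quantizer SNR for a full-scale sine, per bit depth
for bits in (14, 16, 20, 24):
    print(f"{bits}-bit: {6.02 * bits + 1.76:.1f} dB")
# 14-bit:  86.0 dB  (what many "16-bit" systems really manage)
# 16-bit:  98.1 dB  (comfortably above the ~80dB wanted here)
# 20-bit: 122.2 dB  (about the real-world limit of "24-bit" ADCs)
# 24-bit: 146.2 dB  (theory only)
```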

Add to this the problem that digital systems are not as perfect as we think they are - for example most 16-bit systems actually have trouble working to better than 14-bit accuracy when you measure them - and there’s an advantage to using a greater bit depth.

On replay, modern high-resolution DACs often work at 32-bit internal precision in order to minimize distortion and noise and deliver superior accuracy. These DACs seem to work slightly better when given a 24-bit file than a 16-bit one (and I’m ignoring DSD here, for the moment, which has different advantages and problems).
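One way to see why extra internal precision helps (a toy float comparison, not a model of any particular DAC’s arithmetic): run the same signal through a long chain of tiny gain changes at 32-bit and at 64-bit, and watch the rounding error accumulate.

```python
import numpy as np

x64 = np.linspace(-1.0, 1.0, 10_000)   # 64-bit float reference
x32 = x64.astype(np.float32)           # the same signal at 32-bit

for _ in range(1000):                  # a long chain of tiny gain tweaks
    x64 = x64 * 1.0001
    x32 = x32 * np.float32(1.0001)

# Nonzero: the 32-bit path has drifted from the 64-bit one purely
# through accumulated rounding error
print(np.max(np.abs(x64 - x32)))
```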

In essence the real problem is that we are using numerical coding methods to try and record, store and reproduce a complex analogue waveform, so the usual rules accorded to the analogue markers of signal-to-noise and frequency response don’t apply.

I’ve spent time working with digital engineers and they tell me that what goes on inside ADCs and DACs isn’t as clear cut and accurate as the layman would like to think. On that basis the higher the bit depth the more likely it is that we can capture, store and reproduce the musical waveform without noise and distortion intruding.

So is 24-bit ‘high resolution’ compared to 16-bit? My answer would be yes. However, that was not the question. It was closer to ‘is a 24-bit/44.1kHz file truly high resolution?’ That’s not so clear, on the basis that the noise and distortions introduced by the 44.1kHz sampling may well outweigh the advantages of the greater bit depth.

So, does that mean we should strive for 24/96 when it’s available?

Yes!

But only if we have already bought the recording on vinyl, CD, SACD, DVD-Audio and 24bit/44.1kHz. After all we are not humans, we are simply WALLETS!!!
