Resolution is a technical term. If the definition of High Resolution is “anything better than CD 16-bit/44.1 kHz”, then any format that gives more resolution than that – no matter in which direction – is High Resolution. The belief that differences in resolution must result in an audible difference, where the higher resolution always wins, is held almost entirely by the masses – fed by the marketing of a zillion-dollar industry. The fact that a higher resolution actually does make a difference under certain circumstances / for certain applications is only water on the mills of the marketing people and a prayer for the believers.
But all this doesn’t mean that there can’t be audible differences. Science still has no complete understanding of people’s senses and their limits. While engineers are constantly trying to understand and implement the latest scientific knowledge in better products (algorithms, formats, speakers, …), marketing and the public are usually just not interested in science and research results – unless it’s good for the water or the prayers.
When applied to ADCs and DACs, the term resolution normally refers to the size of each sample in bits, which is directly related to the number of distinct digital values that can be represented. So it’s quite natural (to me at least) to describe a 24-bit audio signal as high resolution, irrespective of its sample rate (e.g. 44.1, 48, 96 kHz). Of course you can argue about whether the extra dynamic range that 24-bit samples give you is worthwhile or detectable by human ears/brains, but that’s a different argument from whether or not 24-bit audio is technically higher resolution than 16-bit audio (whatever the sample rate).
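For what it’s worth, the arithmetic behind “number of distinct digital values” is just powers of two – a throwaway Python check, nothing audio-specific assumed:

```python
# Distinct values a single PCM sample can take at each bit depth.
for bits in (16, 24):
    print(f"{bits}-bit: {2 ** bits:,} distinct sample values")
# 16-bit: 65,536 distinct sample values
# 24-bit: 16,777,216 distinct sample values
```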
That’s the root of the problem – 24 bit does not give the audio extra dynamic range, it simply gives one the ability to record audio that has a higher dynamic range than can be captured in 16 bits; and I don’t know of any such audio once you exclude jet engines and atomic explosions. But to most people those two statements are equivalent, and the marketing people know this and exploit it.
Here’s a simple test. Take a 24-bit audio file and measure its dynamic range (Roon does this when it analyzes a track, by the way). Now dither that 24-bit audio file down to 16 bits and measure the dynamic range again. The two values will be exactly the same.
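Here’s a sketch of that test in Python/numpy. To be clear, this is not Roon’s measurement code – just a crude peak-to-RMS “dynamic range” estimate plus a TPDF dither stage, with a synthetic signal standing in for a real 24-bit file:

```python
import numpy as np

def dynamic_range_db(x):
    """Crude dynamic range estimate: peak level minus RMS level, in dB."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

def dither_to_16bit(x):
    """Quantize a float signal in [-1, 1] to 16-bit steps with TPDF dither."""
    lsb = 1.0 / 32768.0                       # one 16-bit quantization step
    tpdf = (np.random.rand(len(x)) - np.random.rand(len(x))) * lsb
    return np.round((x + tpdf) / lsb) * lsb

# Stand-in for a real 24-bit track: a tone with a slowly varying envelope.
# In practice you would load the samples from a file instead.
fs = 44100
t = np.arange(fs * 5) / fs
track = 0.5 * np.sin(2 * np.pi * 440 * t) * (0.1 + 0.9 * np.abs(np.sin(2 * np.pi * 0.2 * t)))

print(f"original (float / '24-bit'): {dynamic_range_db(track):.2f} dB")
print(f"dithered down to 16-bit:     {dynamic_range_db(dither_to_16bit(track)):.2f} dB")
```

With anything resembling real music the two figures agree to a tiny fraction of a dB, because the noise added by 16-bit dithering sits down around −96 dBFS, far below the RMS level of the music itself.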
Fundamentally, that’s missing the point. In your initial post you said
If you ignore that, the rest of the discussion is meaningless†.
16 bits offers you 96 dB between the loudest sound you can record and the ineluctable quantization noise of any digital recording. With noise shaping, it’s more like 100 dB over most of the audio band, rising sharply at high frequencies (which most of the denizens of this forum can’t hear anyway).
24 bits gives you 144 dB between the loudest sound you can record and the quantization noise. But that’s irrelevant: no one has yet created audio hardware with an SNR of anything close to 144 dB. Really good audio equipment will get you about 100 dB.
So there’s no point in pushing this one part of the audio chain way below that. 16 bits, with noise shaping, is already pushing the limits of what the rest of your gear can achieve.
† Maybe I should explain why it’s meaningless (since, apparently, that’s not obvious to you). The ratio between the loudest sound and silence is not 96 dB or 144 dB; it’s ∞ dB. Decibels are a logarithmic scale, and log(0) = −∞.
But, in the real world, you never get absolute silence; there’s always some source of noise. So what’s relevant is the ratio between the loudest sound you can record and the noise.
With digital audio, one source of noise (quantization noise) is always present, so that’s what those numbers (96 dB and 144 dB) represent.
As I’ve explained, beyond a certain bit depth, quantization noise just isn’t the relevant source of noise to consider.
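For anyone who wants to see that figure come out of a measurement rather than a formula, here’s a minimal numpy sketch: quantize a full-scale sine to 16-bit steps (no dither, no noise shaping) and measure the signal-to-quantization-noise ratio. It reports roughly 98 dB, the familiar 6.02 × 16 + 1.76 dB for a sine:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                       # one second of samples
sine = np.sin(2 * np.pi * 997 * t)           # full-scale test tone

step = 1.0 / 32768.0                         # one 16-bit step at +/-1 full scale
quantized = np.round(sine / step) * step
noise = quantized - sine                     # the quantization error

snr_db = 10 * np.log10(np.mean(sine ** 2) / np.mean(noise ** 2))
print(f"measured SNR for 16-bit quantization: {snr_db:.1f} dB")   # ~98 dB
```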
Excuse my math …
Still, who defines the mV step that each bit quantifies? The L in LPCM says it is linear, suggesting that other schemes are possible. The PWM scheme of the LaserDiscs handled the issue in an analogue way. So simply equating 24 bits with 144 dB is just one way of looking at it.
The peak voltage produced by your DAC is up to its manufacturer. And how loud a sound that corresponds to can be changed simply by turning the volume knob on your (pre)amp.
No.
The logarithm of the ratio between the loudest sound that can be encoded and the quantization noise is a (linear!) function of the number of bits per sample. This has nothing to do with the (totally arbitrary) absolute magnitude of either.
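Spelled out – trivial, but it makes the “linear in bits, independent of absolute level” point concrete:

```python
# Ratio of full scale to one quantization step, in dB: 20*log10(2**N),
# i.e. about 6.02 dB per bit, regardless of what voltage "full scale" maps to.
import math

for bits in (16, 24):
    print(f"{bits} bits -> {20 * math.log10(2 ** bits):.1f} dB")
# 16 bits -> 96.3 dB
# 24 bits -> 144.5 dB
```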
I’m sure the smart people here can tear this to shreds, but this is the way I think of things. There are two parts to something being “hi def”: the number of steps the value can have, and the number of values per second. The number of steps = bit depth; the number of values per second = sample rate. In the pic, it’s easy to see that with fewer steps the mean curve (which is what we hear) is more “guessed” – for example, the first jump is quite steep, but did it really do that? The mean curve says yes, but more samples may prove that wrong and produce a more realistic mean curve… and realism is what we’re after. Same with level: it’s approximated. Was it 8 or 9? Well, it was actually 8.6, and with more bits I can represent that value correctly. 16/44 is the marker point: anything below that is lossy, anything above it is high def.
All digitized music is lossy; it cannot contain the infinite amount of information in real-life sounds.
But I agree that the Red Book standard of 16 bits / 44.1 kHz has become the measuring stick. Anything potentially able to store more granular information in either the amplitude domain or the time domain can be considered high rez.
CD is StdRez and anything stored in a less granular format would be considered LowRez.
Basically, when discussing two channel PCM:
<1411kbps = LowRez
1411kbps = StdRez
>1411kbps = HiRez
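That 1411 kbps figure is just the Red Book numbers multiplied out – a one-liner sketch:

```python
# Red Book stereo PCM bit rate.
sample_rate = 44_100   # samples per second, per channel
bit_depth = 16         # bits per sample
channels = 2           # stereo
print(sample_rate * bit_depth * channels / 1000, "kbps")   # 1411.2 kbps
```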
‘High-Res’ audio is just complete BS
Just get a Chord M Scaler, with a Qutest/TT2/DAVE, and you’ll never, ever be bothered by it again.
But seriously, it’s all about the accurate reconstruction of the analogue waveform. That’s where 24-bit can be advantageous. But the M Scaler makes all that irrelevant, reconstructing the analogue waveform perfectly from a 16-bit stream (or as perfect as it can be, at this moment in time).
To my ears, and in my system at least, 16/44.1 sounds as good as 24-bit audio.
@mikeb Getting that hook out is going to hurt. Score one for the marketing people. These graphs are complete nonsense. Digital audio is composed of points along a curve, not steps.
Wouldn’t it be more accurate to say that digital audio is just a stream of numbers, more or less by definition? Whether it becomes points on a curve or steps (if that’s a distinction you want to make) depends on how the numbers are converted back to analog. For example if you use a NOS DAC with no reconstruction filter the analog output will be steps.
Here is my simplified understanding – in other words, it’s more involved than this, but I’m only trying to show that there are no discrete steps.
Samples are taken at a given rate (the sample rate), and they contain the frequency and the amplitude of the audio being sampled. The analog waveform is reconstructed by connecting each point with a line, not with a step. There are simply no steps – none, no matter what type of DAC.
That’s my point: whether you get steps or a curve depends on the DAC implementation, it’s not an inherent property of the digital audio representation.
To be more precise, the samples are simply the value of the audio signal – such as voltage from a microphone – at the sample times. “Amplitude” and “frequency” are properties that might emerge when a series of samples is considered as a continuous signal, not properties of an individual sample. (That’s my understanding, anyway, for what it’s worth.)
I agree but I still do not understand how this results in anything remotely resembling steps.
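A toy illustration, in case it helps: the same set of samples reconstructed two ways. The staircase only appears when you use a zero-order hold (roughly what a filterless NOS DAC does), not when you do band-limited interpolation. This is a simplification for illustration, not a model of any particular DAC:

```python
import numpy as np

fs = 8000                                     # sample rate of the "recording"
n = np.arange(64)
samples = np.sin(2 * np.pi * 1000 * n / fs)   # 1 kHz tone, 8 samples per cycle

t = np.linspace(0, (len(n) - 1) / fs, 4000)   # fine time grid = the "analog" output

# Zero-order hold: hold each sample until the next one arrives -> a staircase.
zoh = samples[np.minimum((t * fs).astype(int), len(n) - 1)]

# Band-limited reconstruction: a sum of shifted sinc functions -> a smooth sine.
smooth = sum(s * np.sinc(t * fs - k) for k, s in enumerate(samples))

# Plot zoh and smooth against t (matplotlib, a spreadsheet, whatever): the
# staircase exists only in the zero-order-hold output; the samples themselves
# carry no steps at all.
```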
Glad that you linked to Archimago since he is so often right on the money with respect to marketing BS. Here’s an example from that linked blog post:
While this might be true in some circumstances especially in the old days, who ever said this was universally applicable? After decades, with the maturity of sound technology, assuming the collection of measurements was done appropriately, an alternate explanation is just as likely: “If it measures bad and sounds good, maybe your hearing isn’t as good as you think.” I think it would be hard to argue against this perspective when a subjective writer waxes poetic about stair-stepped squarish waves coming out of an old NOS DAC as if there is some special, non “digital-sounding” quality. From my perspective, these squarish waves are as “digital-sounding” as it gets! Another example of this might be how forgiving our ears/mind are to the effects of jitter; objectively, it takes quite a lot of timing irregularity before most people would be able to put their finger on an audible problem. Yet think of all the times various reviewers have claimed that cables of all things affect jitter significantly or manufacturers seem to think femtoseconds are audible…
And finally, using a piece of equipment in a way it was not intended to be used, e.g. with the filtering turned off, may result in some odd behavior like square waves.
Yet those graphs depict the myth @Jazzfan_NJ refers to in the opening post. The output from a DAC doesn’t look like the rectangular waveform shown in the usual marketing material. Rather, the samples contain the data needed to accurately reproduce the original analog waveform – its dynamic range and its frequency content, i.e. the frequencies and energy present in the signal.
But back to the OP: surely bit depth corresponds to resolution?
The difference between 16-bit resolution, where each sample can take any one of 65,536 unique values, and 24-bit resolution is huge: a 24-bit sample can take any one of over 16 million unique values. And as you say, the practical effect of this is the dynamic range of the signal, although the upper limit you mention for 24-bit is theoretical.
In recording, mixing and mastering this is advantageous. But does this really matter in consumer audio playback? Is high-resolution audio better?
Or unnecessary? I think this is the point being made. Perhaps a more relevant discussion would be about the dynamic range achievable in the typical listening room?
That is exactly the point I’m trying to make. I spent over 30 years as a mechanical engineer, and for me the art of engineering is about designing something without overkill. I designed piping systems and quickly learned that if a 1" pipe was the correct size, then a 2" pipe was just a waste of money.
High-end audio marketing is constantly telling audiophiles that bigger is always better, and this is just not true. Humans cannot hear the jitter present in any well-designed DAC, regardless of how often the audio press carries on about jitter. If you can hear the jitter, then something is broken!
As for the increase in the number of amplitude values that 24 bits gives, I ask: what is the threshold of human hearing? Can humans hear 0.000001 dB versus 0.000002 dB? Come on, let’s get real here.
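Those numbers are actually in the right ballpark for levels near full scale – here’s a quick sketch of the spacing between adjacent codes (and the relative spacing only gets coarser for quieter signals, which is where dither and the noise floor discussed above take over):

```python
# Level difference between two adjacent PCM codes near full scale.
import math

for bits in (16, 24):
    full_scale = 2 ** (bits - 1) - 1          # e.g. 32767 for signed 16-bit PCM
    step_db = 20 * math.log10((full_scale + 1) / full_scale)
    print(f"{bits}-bit: ~{step_db:.7f} dB between adjacent levels")
# 16-bit: ~0.0002651 dB between adjacent levels
# 24-bit: ~0.0000010 dB between adjacent levels
```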
As I’ve always said when it comes to audiophile myths and their handmaidens (the audiophile press and websites): if this kind of nonsense were published in a photography magazine, that magazine would become a laughing stock and would soon be out of business.
But audio is different: the same science that gave you your hifi doesn’t have all the answers and isn’t capable of measuring all that matters. You engineering types should know this; it’s been proven time and time again by the disinterested wife rushing in to pass comment, and by the aficionados who all have high-end systems and hearing more resolving than yours.