MQA disappointing

It means whatever is available at the sample rate of the data. If it’s a 96 kHz file, there is ultrasonic data only up to 48 kHz. If it’s a 192 kHz file, there is ultrasonic data up to a maximum of 96 kHz. I used the word ‘available’ for that reason.
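For anyone skimming, the arithmetic behind that is just the Nyquist limit; here is a trivial sketch (plain Python, nothing MQA-specific):

```python
# The available ultrasonic bandwidth is bounded by the Nyquist frequency,
# i.e. half the sample rate of the file.
for fs_khz in (44.1, 48, 96, 192):
    print(f"{fs_khz} kHz file -> content possible up to {fs_khz / 2} kHz")
# e.g. a 96 kHz file can hold content up to 48 kHz, a 192 kHz file up to 96 kHz
```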

"But only the 13 most significant bits are losslessly encoded as the 13 most significant bits in the output PCM. The next 3 bits are “lossy”.

Following that are 4+4=8 bits of encoded MQA data.

I’ll be generous and say that the effective “lossless” bit depth (before noise-shaping) is 16 bits. A critic would say “13 bits”."

You must be referring to 16-bit data in the above. There is no reason why, for a 24-bit file and an MQA decoder, the full 17-bit signal isn’t recovered losslessly.

“Why are we leaving this one alone? The whole discussion in the other thread (from which our comments were yanked) was about 16/44.1 MQA on Tidal.”

I left it out because I don’t have good information on it. How were the 16/44.1 MQA files on Tidal prepared, meaning the ones that replaced CDs? You’re assuming they were just encoded following the block diagram in fig. 7A for a 96kHz/24b file and then truncated. I doubt that’s the case (but don’t know).

“Sorry, but everyone (including Bob Stuart) agrees that the compression block in Fig. 7A is lossy. It is just not possible to fit losslessly-compressed 96kHz data into the 8 LSBs of a 48kHz file.”

Do you understand what’s in the block labelled “compression”? Bob Stuart explains it (on the MQA website) as predictive coding plus a touch-up signal that allows lossless reconstruction. That’s a standard technique in lossless compression and should preserve the HF section losslessly.

" (Obviously, you understand that if there really were audio content out to 70-80 kHz, you’d need to sample at more than twice that frequency to capture it. 24/48 MQA is only supposed to unfold to 96kHz, which would (lossily!) capture audio signal out to 48 kHz.)"

Sure, but MQA also treats 192 kHz signals and above. There is a third section above 48kHz that is encoded at least to some extent for files at 192k. The block diagram on the MQA site doesn’t have much detail about what’s included but it’s there.

“I suggest you Google ‘quantization noise’.”

Thanks no, I can write part of a treatise on it already :-/
Looking at your answer, I suspect you were talking again about 16b vs 13b signals. I was referring to the 17 (real) bits of a full 24b decode. The usual argument is that MQA is throwing away data from a 24b signal by using the low bits for encoded information. That’s what I was discussing, and that is about the SNR of the signal.

By the way, you mention that “everyone mastering a 16/44.1 signal uses noise shaped dither.” Actually that’s very far from the situation I’m familiar with, where the large majority of 16/44.1 releases use TPDF dither.
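For readers who haven’t met it, TPDF dither is just the sum of two independent uniform random values added before requantization. A minimal sketch (generic textbook practice, not taken from any particular mastering tool; the 24-to-16-bit conversion is my own illustrative choice):

```python
import random

def tpdf_dither_to_16bit(sample_24bit: int) -> int:
    """Requantize a 24-bit sample to 16 bits with +/-1 LSB TPDF dither."""
    lsb = 1 << 8                                   # one 16-bit LSB, in 24-bit units
    dither = random.uniform(-lsb, 0) + random.uniform(0, lsb)   # triangular pdf
    return int(round((sample_24bit + dither) / lsb))

print(tpdf_dither_to_16bit(123_456))               # typically 482, varies +/-1 per run
```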

Every writeup on MQA ever written says that, if you feed a 24/44.1 MQA file to a non-MQA-aware DAC, the output is Redbook CD quality.

For that to be true, the 16 MSBs (up to the application of dither) must be the same as what you would find in the 16/44.1 PCM data on a Redbook CD. The remaining 8 bits can be anything you want.

Now, a naive reading of the block diagram seems to contradict this. Only bits 1-13 are copied directly from the input (and dithered). Bits 14-17 from the input are mapped to bits 17-20 in the output.

If that’s the case, then played on a non-MQA-aware DAC, a 24/44.1 MQA file would have only an effective bit-depth of 13 bits — considerably worse than CD-quality.

Is that what you claim is correct?

Or is my more charitable interpretation of Bob Stuart — namely that the 16 MSBs are (up to dithering) the same as the input — correct?

I realize we are arguing about 1 bit here (16 bits vs 17 bits). But if your interpretation is correct, then the above claim about CD-quality playback on non-MQA-aware equipment is a lie.
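To put rough numbers on the gap between those two readings, here is a minimal sketch using nothing but the textbook dynamic-range formula for an ideal N-bit quantizer (the 16 and 13 come from the two interpretations above, not from any MQA documentation):

```python
def ideal_dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer with a full-scale sine."""
    return 6.02 * bits + 1.76

for label, bits in [("charitable reading (16 true MSBs)", 16),
                    ("naive reading (13 true MSBs)", 13)]:
    print(f"{label}: ~{ideal_dynamic_range_db(bits):.1f} dB")
# charitable reading: ~98.1 dB (i.e. CD quality)
# naive reading:      ~80.0 dB (well short of CD quality)
```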

No, I wasn’t assuming that.

I was assuming, as always, that MQA is ordinary PCM with some number of LSBs devoted to the MQA data (and the remaining MSBs dithered appropriately). For 24/44.1 MQA files (unfolding to 88.2), the number of LSBs devoted to the MQA data is 8. For MQA CDs (16/44.1), the number of LSBs devoted to the MQA data is invariably quoted to be 3. But whatever number it is, that’s how much bit-depth you had to sacrifice to encode the MQA data.

Again, that’s assuming that all of the MSBs are devoted to PCM data, interpretable by a standard DAC. If that’s not the case, then MQA CDs are even worse than I said.
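The bit bookkeeping behind that, as a tiny sketch (the 8- and 3-LSB figures are the ones quoted above; the arithmetic is generic PCM accounting, not MQA internals):

```python
cases = [
    ("24/44.1 MQA stream", 24, 8),   # 8 LSBs carry the folded/encoded data
    ("16/44.1 MQA CD",     16, 3),   # 3 LSBs quoted for MQA CDs
]
for name, total_bits, borrowed in cases:
    print(f"{name}: {total_bits} - {borrowed} = {total_bits - borrowed} bits "
          f"left as ordinary PCM for a standard DAC")
# 24/44.1 MQA stream: 16 bits left; 16/44.1 MQA CD: 13 bits left
```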

Sorry. I don’t know what “predictive coding plus a touch-up signal” means. But I do know something about lossless compression schemes. If all 8 bits were devoted to storing the samples above 48 kHz, you’d need a factor of 4 compression. What lossless compression scheme can reliably get a factor of 4 compression on arbitrary data?
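For what it’s worth, here is one back-of-the-envelope accounting that lands on a number like that; the assumptions (upper band kept at the original rate, roughly 16 significant bits in it) are mine, not the poster’s or MQA’s:

```python
# Hypothetical data-rate comparison for a 96 kHz/24-bit source folded to 48 kHz.
upper_band_rate = 96_000        # assumption: upper band kept at the original sample rate
upper_band_bits = 16            # assumption: significant bits in the ultrasonic band
carrier_rate = 48_000           # the folded stream's sample rate
carrier_bits = 8                # LSBs available to carry the embedded data

needed = upper_band_rate * upper_band_bits     # 1,536,000 bit/s
available = carrier_rate * carrier_bits        #   384,000 bit/s
print(f"required compression ~ {needed / available:.0f}:1")   # ~4:1
# Critically decimating the upper band to 48 kS/s first would halve this to ~2:1.
```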

Andre, I neither said “MQA sounds great” nor “haters always gonna hate”. What I did say was that in my experience the SQ differences between MQA and high res are usually very difficult (if not impossible) to hear (my ears, my system). This neither makes me an MQA hater nor an MQA fan. The “inconvenient” thing about this more neutral point of view is that people like me tend to get a lot of flak from both camps, i.e. haters and fans.

You definitely do NOT have to be a “hater” to understand why people are skeptical about MQA. And this way of thinking has absolutely nothing to do with “feelings of superiority”…

I couldn’t agree more, @hwz. “If you aren’t 100% for me, you must be against me.” This “black and white” way of thinking is the end of any productive discussion…

Not arguing the fact, just the Wiki entry and noting the concerted campaign against it. Also noting human nature.

That it sounds better?

No, that the earth is a cube.

I think that needs walking through?

I came across this today.

And, if you read that statement you quoted in the context of my entire post, I agree with you! :slightly_smiling_face:

“Is that what you claim is correct?”

I’m not claiming anything since we’re all guessing. But I think something different might be the case. Bob Stuart said somewhere, maybe in an interview, that the encoder is flexible and that sound engineers have a choice about bit allocation. That being true, I’d assume that block diagrams like the one in figure 7 are just examples, and that there may be other bit allocations available. It’s only handled between the encoder and decoder and could be signalled by a flag that toggles between bit schemes known to both. In particular, if the file starts off as 44.1 with no ultrasonic data, I wouldn’t expect that the packing scheme shown for high res data would be used (new Tidal files).

The question you pose about handling of packed high res files sent to a non-MQA DAC isn’t anything I can answer. For one thing, a purely 16b device can’t normally accept 24b signals, so if the sending source is Tidal, it might precondition the MQA file (dither?) before reducing to 16b.

“Sorry. I don’t know what “predictive coding plus a touch-up signal” means. But I do know something about lossless compression schemes. If all 8 bits were devoted to storing the samples above 48 kHz, you’d need a factor of 4 compression. What lossless compression scheme can reliably get a factor of 4 compression on arbitrary data?”

A simple illustration. Suppose you have a waveform that is relatively flat for a while. The prediction is a fitting function that approximates the shape of the waveform, which in this case could be a simple rectangle. The second step is creating a difference signal by subtracting the rectangle from the waveform. The difference is small, so it can be represented in a few bits. The third step is transmitting the low-bit difference signal (touch-up) and the parameters of the rectangle. The decoder regenerates the rectangle from the parameters and adds back the touch-up to reproduce the original. It’s a good scheme with many variations and can use very few bits if the waveform is well-behaved.
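A toy version of that idea, for readers who want to see it run (a generic delta/predictive coder of my own, not anything taken from MQA):

```python
samples = [1000, 1002, 1001, 1003, 1004, 1003, 1002, 1005]   # a "well-behaved" stretch

# Encoder: send the first sample, then only the small differences (the touch-up).
residuals = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

# Decoder: regenerate the prediction (previous sample) and add the touch-up back.
decoded = [residuals[0]]
for r in residuals[1:]:
    decoded.append(decoded[-1] + r)

assert decoded == samples                                    # reconstruction is exact
bits = max(abs(r) for r in residuals[1:]).bit_length() + 1   # sign bit included
print(f"touch-up fits in {bits} bits/sample instead of {max(samples).bit_length()}")
```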

Well, we should look at the file format and the documentation and it will be obvious. What? None of that exists and it’s proprietary bullsh;t?! Well, say it isn’t so, history DOES repeat itself…

I’m glad I haven’t the intelligence or inclination to understand all this so I can just press play and enjoy whatever comes out of my speakers.
It seems like a good and lively debate though!

Well, that’s a step up from being misinformed.

In any case, I do insist that at most one of the following two statements can be correct:

  • A 24/44.1 MQA file, produced from a high-resolution source, plays back on non-MQA-aware equipment at full CD quality.
  • The block diagram 7A that you pointed to in calling me “misinformed” is an accurate depiction of how such a 24/44.1 MQA file is created.

People have investigated MQA-encoded files “in the wild” and, AFAIAA, they have found two types:

  • 24-bit files with 8 LSBs devoted to MQA data. (Presumably, these are encoded in a fashion similar to, but not quite the same as, that depicted in the above block diagram.)
  • 16-bit files with 3 LSBs devoted to MQA data.

There’s supposed to be a third type:

  • 24-bit files with 3 LSBs devoted to MQA data, produced from original sources with a 44.1 or 48kHz sample rate – hence no “origami” to include high-sample rate data.

But I don’t know that anyone has actually analyzed such a file, nor whether you could tell, in that case, how many LSBs are devoted to MQA data.

If there are more types, someone should step forward with an analysis.

Any compression scheme, lossy or lossless, starts by building a model of the signal. FLAC uses an LPC (Linear Predictive Coding) model. You then subtract the model from the actual signal to obtain the residual. In FLAC, the residual is losslessly compressed (using Rice coding). In good circumstances, you achieve a factor of 2 or a bit better compression.
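As a rough feel for those numbers, here is a sketch of the same model-plus-residual structure on a synthetic tone (generic second-order prediction of my own, not FLAC’s or MQA’s actual code):

```python
import math

fs, freq = 48_000, 1_000
signal = [round(20_000 * math.sin(2 * math.pi * freq * n / fs)) for n in range(480)]

# Model: a second-order predictor, p[n] = 2*x[n-1] - x[n-2] (exact for straight lines).
residual = [signal[n] - (2 * signal[n - 1] - signal[n - 2])
            for n in range(2, len(signal))]

def bits_needed(values):
    return max(abs(v) for v in values).bit_length() + 1      # magnitude plus sign

print(f"signal ~{bits_needed(signal)} bits/sample, residual ~{bits_needed(residual)} bits/sample")
# prints roughly 16 vs 10 -- a worthwhile saving, but nowhere near "a few bits"
```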

It’s just false that the residual is “only a few bits”. And — unless Bob Stuart has invented a wicked-ass compression scheme that he’s not telling anyone about — there’s no way it can be losslessly compressed in the small number of bits available. Everything that I have read on the subject indicates that the compression of the residual here is lossy.

One description that I have read is that the residual is truncated to some ridiculously-low bit depth, and the result is “losslessly” compressed. If so (and you can never tell with Bob Stuart), then it’s dominated by quantization noise. And I certainly wouldn’t call that procedure “lossless.”

I’m not going to argue this further since it’s all covered in MQA patents, the only issue being that implementations can differ from patents. Figure 7 that I cited earlier is from one of the initial main patents. They are all readable if you have some DSP exposure and I recommend them.

Keep in mind when you’re arguing compression schemes that the predictive coding/touch-up signal is applied only to the high-frequency part of the band split, i.e. above 24 kHz, where the 1/f amplitude characteristic ensures that levels in the ultrasonic region are relatively low. So the information content of the signal is lower than the channel capacity in that region, and correspondingly less compression is required. In designing an algorithm like MQA, Peter Craven and Bob Stuart, with contributions from the late Michael Gerzon, make full use of knowledge of signal characteristics as well as hearing limits. They are hardly amateurs. Sorry, but to me it’s your last paragraph that sounds like a myth.

Also, with all compression, whether look-up-table methods like Huffman and Lempel-Ziv or predictive coding, the compression ratio of ~2 is an average. It depends on signal characteristics and varies by signal region.

I’d be extremely careful about “in the wild” (i.e. trolling) exercises that attempt to reverse engineer MQA based on hacking.

“Don’t believe any independent verification of our extraordinary claims”, said every cult ever. P.S.: the unbelievers are such for they are evil.

Independent verification based on what? You overstate the kind of supposed evidence. Any valid verification has to be based on an understanding of the original product as well as access to it, and I don’t see signs of that.
In certain audiophile web forums, the only acceptable position is to bash Bob Stuart, and anyone who doesn’t is either a shill or a company employee. Do you ever question who the cult is?

No: I deride attempts to write off as “trolling” any effort to understand an intentionally obfuscated technology that aspires to become a standard. @jussi_laako and the others deserve more than your contempt.

Have you ever wondered whether much of the paranoia had a solid basis in reality, including one side of the debate having access to both marketing budgets and years of incestuous relationships within the very industry that purports to keep those marketing claims in check?

“Have you ever wondered whether much of the paranoia had a solid basis in reality, including one side of the debate having access to both marketing budgets and years of incestuous relationships within the very industry that purports to keep those marketing claims in check?”

No. I don’t argue business models, which are a separate thing and not my area. Most attacks are on the MQA process. The algorithm is a sophisticated DSP design. It is scientifically credible (from all I’ve seen) and was done by experts in lossless compression, but it requires enough DSP background to follow. Most of it seems to be available by now in the patents. What I don’t envy Bob Stuart for is the need to explain it to audiophiles who lack a DSP background but unquestioningly believe any negative comment made by their friends. MQA may have competitors, which is a usual reason for obfuscation. That’s true of many other audio algorithms.
