Why MQA is bad and Roon shouldn't have bothered with it :)

Why assume it has to be more than just simple render instructions? It leaves room for future development, say HDCP-type control.

MQA isn’t a solution designed for the end user; some keep forgetting that.

We are of the same belief. MQA is the opening salvo in an attempt to control music consumption through DRM-like controls. It’s for the music producers, not the music consumers.

Folks can argue all the technical details they want, which I’ll admit is beyond my motivational level to read/understand, but still miss the ultimate nefariousness that lies hidden in the whole MQA propaganda. There’s a reason big music-producing companies and streamers, like TIDAL, are supporting MQA.

When the day comes in the not-too-distant future that one can only listen to music on a CD player or through streaming, all the technical discussions will center around DRM busting and not dubious improvements in SQ.

On that point, I have a non-snarky question for any who care to answer. If the rendered output (if I’m using the right term) is a result of up-sampling, then how is this better than up-sampling a straight PCM file?

1 Like

Do you really care @xxx?

peace

The best description of the MQA encapsulation process (or something like their process) can be found here: http://www.audiomisc.co.uk/MQA/origami/ThereAndBack.html

You can see that this process does – as if magically – ‘unfold’ copies of the components that were aliased during downsampling.

That is why they describe it as a “fold” and use the origami diagram. It’s true that this process also generates upward and downward aliases; MQA assert that these are not audible.

You also have to remember that they are only trying to keep some of the frequency data, say up to 48–60kHz. Anything beyond that is just ultrasonic noise.
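To see the fold in miniature, here is a toy Python sketch (my own illustration, not MQA’s code; the 72kHz tone and the bare, unfiltered decimation are chosen only to make the aliasing visible). A tone above the 48kHz Nyquist of the 96k stream folds downward on decimation, and plain zero-stuffed upsampling puts an image right back at the original frequency:

```python
# Toy demonstration of alias folding and "unfolding"; not MQA's code.
import numpy as np

fs_hi = 192_000                       # original sample rate
n = 1 << 14                           # power-of-two length for a clean FFT
t = np.arange(n) / fs_hi
x = np.sin(2 * np.pi * 72_000 * t)    # 72kHz tone, above 96k's 48kHz Nyquist

# Decimate by 2 with NO anti-alias filter: 72kHz folds to 96 - 72 = 24kHz.
x_96k = x[::2]
f_lo = np.fft.rfftfreq(x_96k.size, d=2 / fs_hi)
alias = f_lo[np.argmax(np.abs(np.fft.rfft(x_96k)))]

# Zero-stuff back to 192kHz: the 24kHz component gains an image at 72kHz.
x_up = np.zeros(n)
x_up[::2] = x_96k
f_hi = np.fft.rfftfreq(n, d=1 / fs_hi)
spec = np.abs(np.fft.rfft(x_up))
image = f_hi[np.argmax(spec * (f_hi > 48_000))]

print(f"alias after downsampling: {alias:.0f} Hz")            # 24000 Hz
print(f"ultrasonic image after upsampling: {image:.0f} Hz")   # 72000 Hz
```

A real renderer would then shape that image region with its chosen filter rather than leave the raw zero-stuffing images in place.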

We already know what (some) of the rendering instructions are.

How Rendering works

There is no question or controversy about Roon doing the first unfold, or about the ability to remove and then restore the rendering instructions.

My point is simple: the claim that rendering is just upsampling contradicts the ABC diagram and other descriptions provided by MQA. So the people who make that claim have to explain that discrepancy.

It contradicts it because upsampling as we normally think of it is fixed: other than the 96k data stream there is no time-varying accompanying data stream, just an instruction (“use filter 6”) out of the 32 MQA filters that Archimago has found.

@MusicFidelity I did ask Brian in this thread, but as expected he cannot disclose inside information.

But it seems clear to me that rendering is not just upsampling.

I have a vague Luddite curiosity, but mostly I want to use it as a straw man.

I suppose it won’t be answered since the technical discussion has marginalized me in a thread I started.

:laughing:

Yes I noticed that, hence my question :wink: I guess you have to start another one :joy:
suggestion: Maximum Quarrel Audio

3 Likes
  • My guess is that MQA made up the terms: “L-fold” is a lossless fold and “E-fold” is the lossy encapsulation fold.

  • For a better idea of what the non-Shannon downsampling is, read their patent. I found the link: https://patents.google.com/patent/WO2014108677A1/en.

How it works is this:

  1. MQA run the music file (let’s say original sample rate 192kHz) through analysis software.

  2. The software determines things like content bit depth, highest non-noise frequency, noise levels, etc. It picks the ideal downsampling filter (non-Shannon == produces aliases).

  3. The 192kHz file is downsampled to 96kHz. Elements of the original spectrum from 48–96kHz are downward aliased into the 0–48kHz band of the 96kHz data.

  • Finally, as part of the encapsulation process, rendering data is written as metadata onto the 96kHz MQA file: signal bit depth, original sample rate, upsampling filter to use, dither and noise-shaping instructions for the renderer, etc.

  • Let’s skip the whole MQA encode/decode stage. That’s conceptually a separate process which is more or less lossless and converts the 96kHz file into a 48kHz file. Let’s assume the process is entirely lossless, that the decoder has just run, and that we are sitting with the same file produced by the encapsulation process.

  • The renderer then reads in the metadata and operates the upsampler, dither, noise shaper, etc.

  • Because the 48–96kHz data was downward aliased into the 96kHz file, those data points will “magically re-appear” once upsampled (see the sketch below).

I believe this answers the original question regarding the folding of this data?
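In code, that last rendering step might look something like this hypothetical sketch. The field names, the 32-entry filter bank, and the filter shapes are all invented for illustration; the only part taken from the steps above is that the upsampler is steered by an embedded instruction rather than being one fixed choice.

```python
# Hypothetical sketch of metadata-driven rendering; nothing here is MQA's
# actual format. It illustrates an upsampler steered by an embedded
# instruction ("use filter 6") instead of a single fixed filter.
import numpy as np
from scipy.signal import firwin, lfilter

# Stand-in bank of 32 upsampling filters of varying lengths.
FILTER_BANK = {i: firwin(9 + 8 * i, 0.5) for i in range(32)}

def render(x_96k, meta):
    """2x upsample using whatever filter the embedded metadata selects."""
    up = np.zeros(2 * x_96k.size)
    up[::2] = x_96k                       # zero-stuff back to the original rate
    h = FILTER_BANK[meta["filter_id"]]    # instruction-driven, not fixed
    y = 2.0 * lfilter(h, 1.0, up)         # gain of 2 compensates zero-stuffing
    # dither / noise shaping per meta["noise_shaping"] would follow here
    return y

meta = {"original_rate": 192_000, "filter_id": 6, "noise_shaping": "tbd"}
print(render(np.random.randn(1024), meta).size)  # 2048 samples at 192kHz
```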

I don’t think Archimago found anything; Mans on CA is the one who reverse-engineered it and provided the information to Archimago. Maybe approach him and he can explain what he found. MQA themselves certainly won’t divulge more than they have to; they don’t want those details public and would rather keep people guessing, just like with the vague “de-blurring”.

https://www.computeraudiophile.com/profile/24458-mansr/

What is the vagueness about de-blurring? They are reducing the impulse response length of the entire end-to-end system and removing ringing, especially pre-ringing.

1 Like

Pre-ringing created where? As far as I understand it, this ringing boogie-man is an artifact of D/A reconstruction with lower sample rates. What are they de-blurring up front, an artifact of the A/D analog filter? Andrew posted a link indicating that pre-ringing isn’t an issue there.

Once A/D captures are done at higher sample rates, the ringing boogie-man is vastly reduced.
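Some rough numbers behind that (an illustrative tap count, not any real converter’s filter):

```python
# Back-of-the-envelope: a filter with a fixed number of taps spans less
# time at a higher sample rate, and its ringing, concentrated near the
# cutoff at Nyquist, sits further above the audible band.
taps = 127                           # hypothetical anti-alias filter length
for fs in (44_100, 96_000, 192_000):
    span_ms = 1000 * taps / fs       # total impulse-response duration
    nyq_khz = fs / 2000              # ringing frequency is near Nyquist
    print(f"{fs:>6} Hz: {span_ms:.2f} ms span, ringing near {nyq_khz:.1f} kHz")
```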

1 Like

Just quoting from their own patent:

  • “Ringing” of filters used for anti-aliasing and reconstruction is undesirable, even if in the high ultrasonic range of 40kHz–100kHz.
  • A pre-ring is usually more of a problem than a post-ring, but both are bad.
  • It seems best if the temporal extent of the total system impulse response can be minimised.

Furthermore, if the majority of the archive has already been A/D captured at a low rate, presumably ringing is still theoretically a problem. I agree that with newly recorded material at high data rates it should not be an issue.

So for the material that we know has been captured at high res and converted to MQA, we really have no idea what this de-blurring is, nor can we isolate it from the origami folding to validate the marketing claims. The renderer’s slow roll-off minimum-phase filter aside, of course.

If MQA is eventually going to start working from low-sample-rate digital sources for conversion, there are definitely nasties that need to be fixed. But how often has that been done so far? I assume it would be 16-bit MQA?

I see absolutely no need for MQA when the capture was already done with modern A/D hardware at higher sample rates. The only reason to do it at this point is to protect the studio’s crown jewels. The bandwidth argument from a supplier standpoint, maybe, but even that doesn’t hold much water.

This is the problem many of us have with MQA: it’s a solution to what problem?

Yes, I am aware of Måns’s work.
But he has published reverse-engineered code.
Analyzing it seems like work :slightly_smiling_face:

“undesirable”
“problem”
“bad”
“best”

Because MQA says so.

AJ

Almost anything and everything converted with a modern ADC from the early 1990s to the present has been converted with an oversampling ADC. That means a high sample rate followed by a downsampling digital filter, most often to 44.1/48 kHz for CD or DVD, though to ≥88.2/96 kHz rates for other releases in more recent years. And in almost all cases, that digital filter is linear phase, hence both pre-ringing and post-ringing. So, that ringing – even at high sample rates – triggers MQA’s prejudice, despite sinc-based brick-wall digital filtering being the theoretical and practical standard in digital communications.
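A toy comparison makes the point (my own sketch; the 127-tap lowpass and its 0.5 cutoff are arbitrary). The symmetric linear-phase filter puts close to half of its impulse-response energy before the main peak, i.e. pre-ringing, while a minimum-phase filter with the same magnitude response puts essentially none there:

```python
# Linear-phase vs minimum-phase pre-ringing; arbitrary example filter.
import numpy as np
from scipy.signal import firwin, minimum_phase

h_lin = firwin(127, 0.5)        # symmetric linear-phase lowpass
h_min = minimum_phase(h_lin)    # same magnitude response, energy up front

def energy_before_peak(h):
    """Fraction of impulse-response energy arriving before the main peak."""
    p = np.argmax(np.abs(h))
    return np.sum(h[:p] ** 2) / np.sum(h ** 2)

print(f"linear phase:  {energy_before_peak(h_lin):.1%} pre-peak")   # ~45%
print(f"minimum phase: {energy_before_peak(h_min):.1%} pre-peak")   # ~0%
```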

AJ

Additionally, the AES paper demonstrates that for MQA the system impulse response is 10 times faster than that of a traditional 192kHz PCM file transmitted via regular A/D and D/A converters.

So even for contemporary hi res they claim an improvement.

The timing issue and the ringing problem have been central to their work for a long time.
They were early (first?) with apodizing filters, which are now adopted by many highly regarded systems.
Their DSP speakers address it in several ways.

Are you really criticizing an engineering group for having design principles they believe in?

Sure, absolutely.

Digital audio for a long time now has been about inventing problems first, then inventing solutions for those problems second. Never mind the lack of conclusive evidence of those problems. Meridian and MQA are far from alone in that regard. Create a different or specific digital audio philosophy, then stick to that narrative as a selling point.

Any and all who do so are subject to constructive criticism.

AJ

2 Likes