MQA software decoding in Roon

This article explicitly says it does not address the technology, which is what we have been discussing there.
But the arguments about the business model seem weak.

  1. “MQA will hoover up lots of money from the supply chain”. That’s a theoretical argument. Let’s look at real-world pricing. I know of two places to buy MQA files, and one place to stream (“rent”) them. I checked some examples.
    2L’s Magnificat:
    MP3: $9
    16/44: $14
    24/96: $19
    24/192, DSD64: $23
    DSD128, MQA “original resolution”: $24
    DSD256, 24/352: $30

Two classical albums:
24/96: $16.60
MQA: $17.60

Two jazz albums:
24/96: $20
MQA: $20.90

16/44 and MQA: same price
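For what it’s worth, the premium implied by those numbers is small. A quick tally (a sketch, using only the album prices quoted above):

```python
# MQA premium over the 24/96 version, using the album prices quoted above (USD).
prices = {
    "classical": {"24/96": 16.60, "MQA": 17.60},
    "jazz":      {"24/96": 20.00, "MQA": 20.90},
}

for genre, p in prices.items():
    premium = p["MQA"] - p["24/96"]
    print(f"{genre}: MQA premium = ${premium:.2f} "
          f"({100 * premium / p['24/96']:.1f}% over 24/96)")
```

In both cases the premium is about a dollar, roughly 5–6% — hardly “hoovering up” the supply chain.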

  2. DRM
    This is a hypothetical argument that something could magically lock up the files that you can play on your DAC. How could that happen? Some update download with unannounced capabilities? Lawsuits everywhere. Companies generally don’t survive that.

  3. Middleman stifling creativity
    This technology will be successful in the marketplace, or it will not. Consumers and artists can choose whether to use it or not. If few artists or few consumers choose to use it, it will die. Like some others have.

Linn seems to have a strange view of pricing power.

In my view, a waste of pixels.

EDIT: the choice between owning and renting is unrelated.

The paper I referenced above calls out, on pages 8 and 9, non-Shannon sampling that carries more information per sample than Shannon-sampled audio. It doesn’t take a great leap to imagine that the 192k and higher unfolds reconstruct the original 4x/8x sample rate from the fully backwards-compatible, playable 96k “Shannon” samples.


Again, this is well explained in their papers.

They argue that the rectangular approach to information transfer is inefficient. “Rectangular” means that all frequencies, up to half the sampling rate, are granted the full 24-bit information content. But in practice that isn’t needed: we never have (and couldn’t tolerate) full-scale signals at high frequencies, and the noise floor sits well above the 24-bit limit. So they can save a lot of bandwidth by not allocating space to the whole rectangle, and passing through only the interesting data range.

So only a few of the 24 bits at the higher frequencies need to be transmitted, and those can fit under the noise floor.
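The saving can be made concrete with a toy bit-budget calculation. The per-band peak and noise-floor figures below are purely illustrative assumptions, not MQA’s actual numbers; the point is only that the usable dynamic range, and hence the bit count, shrinks rapidly with frequency:

```python
import math

def bits_needed(peak_dbfs: float, noise_floor_dbfs: float) -> int:
    """Bits required to span the range from peak level down to the
    noise floor, at roughly 6.02 dB of dynamic range per bit."""
    usable_db = peak_dbfs - noise_floor_dbfs
    return max(0, math.ceil(usable_db / 6.02))

# Hypothetical bands: peaks fall and the noise floor rises with frequency.
bands = [
    ("0-12 kHz",    0, -144),   # full scale possible, 24-bit floor
    ("12-24 kHz", -30, -120),
    ("24-48 kHz", -60, -110),
    ("48-96 kHz", -80, -100),
]

for name, peak, floor in bands:
    print(f"{name}: {bits_needed(peak, floor)} bits instead of 24")
```

Under these made-up assumptions, the top octave needs only a handful of bits — exactly the kind of space that can be tucked under the noise floor.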

This is not new. It was part of the original explanations in 2015.


I remember Bob Stuart saying that MQA is a FLAC file and, as such, no one can stop it. If you can play FLAC/WAV, you can play undecoded MQA.
So Linn or anyone else cannot block it. Consumers will eventually decide with their ears. Perhaps they don’t like this aspect and feel the need to lash out a bit.

They are. And nowhere in the MQA documentation have they claimed that they do upsampling. In fact, they explicitly denied (on CA, I think?) a reader’s suggestion that MQA unfolding uses the apodizing upsampling that Meridian used earlier, developed by the same engineering team.

I find it odd that you persist in these statements that have no basis in any of the documentation, and are quite explicitly denied.

Including in the data stream hints about upsampling approach is a clever idea. Maybe you could patent it. But it is not MQA.


And Linn Records’ own pricing is $24 for 24/192 files and $13 for CDs. So, given that they pay no MQA costs, they must be extracting more money from the supply chain for hi-res files than MQA does. An extra $11 for not chopping the file down to 16/44: who really has the high margins on hi-res content, and who is making the “outright grab” they accuse MQA of?


Maybe if we all stopped using the marketing term “unfolding” and stuck with upsampling/oversampling, etc., we could be more objective about things?

No, because it isn’t upsampling or oversampling.
This is a technical comment. Read the documentation.

We could use other terms, like turbocharging or radial or refractive.
Would not make this discussion objective.


If something was originally 192k, then upsampling or oversampling is not the correct terminology. In fact, it is probably sensible to stick to the marketing terms, to avoid the confusion caused by those who insist on conflating the MQA process with other, more common methods of signal processing.

I think the bandwidth is the same in that plot because it refers to communication bandwidth: the number of bits transmitted when delivering the digital (encoded/folded) MQA file in its FLAC container. One certain conclusion from these plots is that MQA compresses the original file such that the musical information sampled at rates above 96 kHz doesn’t take many bits. You are inferring that, because few bits are allocated to the high sampling rates, MQA stores no information captured at those rates and is simply “upsampling.” Based on the references provided in this thread and by the Roon team, I don’t think you are right. MQA reconstructs the content originally sampled at high rates using an algorithm that decodes the original information content as close to the source analog signal as they can determine from the studio master (and its digitization).

Your description of upsampling is certainly valid but it isn’t what MQA is doing in the decoding as far as I can tell from what I’ve read.

I think one reason there is so much confusion on this thread (and elsewhere) is that people equate higher sampling frequency with higher information content. That is simply not the case. High-frequency content does require high sampling rates to capture properly, but just because a signal was sampled at 352 kHz doesn’t mean it contains 352 kHz worth of information. That is solely a function of the original analog signal (and really, the true information content was the live music in the studio; everything after recording, mastering, etc. is a representation). The MQA approach puts less focus on the sampling rates and more on the actual analog information content.
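That point is easy to demonstrate numerically. The sketch below (my own illustration, not from any MQA paper) samples a tone that is band-limited to the audio range at a hi-res rate; despite having 8x the samples of a 44.1 kHz capture, all of the spectral energy still sits below 22.05 kHz:

```python
import cmath
import math

RATE = 352_800            # "hi-res" sampling rate in Hz
N = 882                   # 2.5 ms window -> 400 Hz DFT bins
TONE = 4_800              # band-limited content: a single 4.8 kHz sine

samples = [math.sin(2 * math.pi * TONE * n / RATE) for n in range(N)]

def dft_mag(x, k):
    """Magnitude of DFT bin k (naive O(N) per bin; fine at this size)."""
    return abs(sum(s * cmath.exp(-2j * math.pi * k * n / len(x))
                   for n, s in enumerate(x)))

energy_below = sum(dft_mag(samples, k) ** 2 for k in range(1, N // 2)
                   if k * RATE / N <= 22_050)
energy_above = sum(dft_mag(samples, k) ** 2 for k in range(1, N // 2)
                   if k * RATE / N > 22_050)

# The 352.8 kHz container holds essentially no energy above the audio band:
print(energy_above / energy_below)
```

Sampling this signal at 352.8 kHz produces a bigger file, not more information; extra information appears only if the analog source actually had ultrasonic content.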


The reason is that MQA argues the “language of digital” does not properly represent the true analog information content. For example, when you see a file is 24 bit / 192 kHz, you infer that this “high resolution” representation of the music is higher fidelity and better sound quality than a 16 bit / 44.1 kHz file. As we have seen a number of times, the sampling is actually much less important than the mastering or the quality of the reproduction system, because the sampling rate and bit depth (the digital-domain parameters) don’t completely represent the true fidelity of the digital file.

MQA argues digital is good for storage and distribution. Analog is where listening actually takes place. The MQA encoding/decoding process is intended to best present to your ears the actual information in the original master. Their scheme works because they have found that assigning 24 bits to a high resolution file has a lot of “wasted space” meaning many of the bits don’t carry “true information” (just the variations caused by analog-digital conversion). They are willing to throw away some of this digital data in order to use part of that space to encode the information that later gets unfolded during the decoding process.
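A crude way to picture that trade is ordinary least-significant-bit embedding. To be clear, this is not MQA’s actual codec — just a minimal sketch of how payload data can ride in the bits of a 24-bit sample that sit below the noise floor:

```python
def embed(sample: int, payload: int) -> int:
    """Overwrite the bottom 8 bits of a 24-bit PCM sample (assumed to
    sit below the noise floor) with one byte of hidden payload."""
    return (sample & ~0xFF) | (payload & 0xFF)

def extract(sample: int) -> int:
    """Recover the hidden byte from the bottom 8 bits."""
    return sample & 0xFF

original = 0x123456       # a 24-bit sample
folded = embed(original, 0xAB)

assert extract(folded) == 0xAB          # decoder recovers the payload
assert folded >> 8 == original >> 8     # audible top bits are untouched
```

A legacy player just hears the payload bits as low-level noise, while an aware decoder pulls them out and uses them — which matches the “fully backwards-compatible” behaviour discussed above.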

I’m not a studio engineer or sound professional, but I did take graduate-level linear mathematics and image processing courses, and I remember distinctly that digital sampling is always an approximation of the original analog signal. You can get a better approximation when you know something about the impulse response of the system that converted the analog signal to digital… the MQA emphasis on knowing/controlling the ADCs used in mastering and the DACs used in playback is an attempt to use the impulse response of the end-to-end system to improve audio reproduction.
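Here is a toy version of that idea (my own example; the two-tap filter is made up, not any real ADC/DAC response): if the playback side knows the exact impulse response the signal passed through, it can invert it and recover the input.

```python
def apply_filter(x, h):
    """Simulate a converter chain as convolution with impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

def invert(y, a):
    """Undo h = [1, a]: since y[n] = x[n] + a*x[n-1],
    recover x[n] = y[n] - a*x[n-1] sample by sample."""
    x, prev = [], 0.0
    for yn in y:
        xn = yn - a * prev
        x.append(xn)
        prev = xn
    return x

signal = [1.0, -0.5, 0.25, 0.8]
blurred = apply_filter(signal, [1.0, 0.3])
recovered = invert(blurred, 0.3)[:len(signal)]
assert all(abs(s - r) < 1e-9 for s, r in zip(signal, recovered))
```

Without knowing the impulse response exactly, the inversion would be wrong — which is the argument for characterising the ADCs and DACs end to end.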


Thanks @Ronnie. I have some skepticism about the article, but I tend to agree with his last paragraph:

“In the end I’m confident that the free, readily available, high quality, open-source alternatives will win out. Lock down, centralisation and profiteering has a tendency towards failure.”

I’ve seen this play out in the past, and many didn’t make it. I guess only time will tell…

No matter how narrow the triangular information space becomes above 22.05/24 kHz, and no matter how few bits are needed to encode that high-frequency information, the bugaboo is that, after the first unfold, MQA is somehow able to pass subsequent unfold information successfully through intermediate DSP such as surround processing, room correction, etc. If applied prior to the first unfold, that same DSP would destroy the MQA encoding, which requires bit-for-bit accuracy. Furthermore, any subsequent unfolds can be accomplished by renderers with very little processing power. So something is materially different about those subsequent unfolds, and that rightly arouses questions.



@erich6, thanks for the explanation. The argument for compromising digital data in order to get better analog reproduction is based on their claims, not on an engineering point of view.

In my line of engineering work, digital technology is used to capture data with great precision so we can analyse and process it without any doubt. If a compromise is made, it severely impacts our accuracy.

Digital technology captures analog signals with great precision (losslessly); if they claim otherwise, they need to support their ideas with engineering and scientific proof. Making such claims without it will simply raise doubts and backlash.

Yeah, I agree something different is happening with the first unfold than subsequent ones. I haven’t dug deep enough in the technical references to figure it out yet.

@MusicEar, Fair comment. I suspect that for well-engineered masters and distributions MQA won’t make much of a difference in apparent sound quality. That’s certainly been the case for the limited samples I’ve tested. It is an interesting scheme for low-bandwidth digital distribution and it emphasizes good end-to-end engineering considerations…both are positive aspects of the approach.

What evidence exists that these 2nd and 3rd stage unfolds can happen with “low processing power” renderers?

If you’re simply referring to the price of equipment like the Dragonfly and Explorer, then remember that the processing power within these devices is vastly greater than what was in “premium” DACs of 10 years ago, as processing power has vastly increased (and its price has fallen) over that time.

That does NOT make the Dragonfly/Explorer better DACs than those 10-year-old premium DACs…but if you’re basing your claim of “low processing power” solely on their price, then that would be a mistake, IMHO.

Price is not relevant, nor is comparison to 10 year old premium DACs. Processing power in renderers has been shown to be relatively limited.



I don’t regard simply looking at a spec sheet as “substantiation” of your claim of low processing power…as he admits himself, he doesn’t know what else is within that design.

Read the comments to the article. There is plenty more evidence. You are swimming against the current.