MQA General Discussion

Possible, but it looks bad, because double-compressing a JPEG reduces the already poor SNR of JPEG.

Also, if the vignetting error is below the dynamic range of the JPEG, you cannot do much about it, because the error is not even encoded there.

Since MQA cuts the dynamic range to about 16 bits, a lot of the to-be-corrected errors are already lost in the increased noise, and the limited overall resolution further restricts the possibilities.

It’s like hiding a vignetting error by reducing the image to 4 bits per color channel.

The problem I see with MQA is that they are trying to correct errors using a lower resolution than the devices at either end have even without any correction…

Exactly. What I wonder is how redbook masters will be treated. Let’s face it: if MQA becomes mainstream, the bulk of the music will be MQA-processed redbook. So what will be the deal there? You can’t go lower than 44.1 kHz sampling; if you upsample and do the origami folding, you get 24/44 at a much bigger size and nothing more than an apodized file… I’m confused as to what the end goal is.

Disagree.
Fuji makes a series of cameras with an unconventional sensor with a hexagonal array. For several years, there was no software that could process the RAW file; the only way to use it was with JPEG output. The camera was highly regarded, partially because of that sensor: it was not sensitive to moiré, hence didn’t need an anti-aliasing filter, hence was sharper than conventional cameras of that resolution. This unconventional sensor, which invalidated normal measurements like resolution and also RAW processing, was the very reason it was so successful. The positive evaluations came from taking pictures with it, printing them and looking at them. It was right up there with the best. And yes, people did lens correction. Classical measurements and theoretical analysis were not relevant to that sensor, but the results could be printed at large size with gallery quality.

There is a parallel here with MQA. You and some others make these absolutist statements about MQA, and about cameras, without having listened to much content on a known, quality system. MQA is an unconventional approach; it is possible that classical measurements and analysis are not very relevant, just like the Fuji sensor was unconventional. I think we should acknowledge that MQA tries something different, evaluate it on that basis, and see what we find.

It is also possible that it is misguided, or ineffective. It will be interesting to find out when we listen.

I grant you, there isn’t much content or many devices available yet. So we will have to wait.

@Miguelito: you’re talking about how MQA cannot possibly do any good work on 44/16 content. The Amy Duncan album was recorded in 44/16, and the MQA version is still 44/16; apparently MQA claims that they can improve such content, and Amy is very impressed by the improvement. (I’m not vouching for it, haven’t bought it.) Again, if the originators of the system claim improvements that don’t fit in classical analysis (like camera sensor resolution), and the artist likes it, maybe we should be open to that possibility.

I’m interested in unconventional approaches that perform well under both objective measurements and subjective listening… That’s what I demand from my system.

[quote=“AndersVinberg, post:244, topic:8204”]
Fuji makes a series of cameras with an unconventional sensor with a hexagonal array. For several years, there was no software that could process the RAW file, the only way to use it was with JPEG output. [/quote]
I believe you might be talking about Foveon sensors. These are still in use. The fact that it is not a Bayer pattern is not really all that important; it’s just a different methodology. The same is true for the new Pentax camera that does pixel shifting to increase resolution. It is a little bit like PCM vs DSD recording.

I don’t mean to make absolutist comments - I have said repeatedly I keep an open mind. But some things are true, like the bit depth being limited to 16 bits, or the fact that it is lossy. Not that those are necessarily bad attributes. I listen to a tube amplifier and love the sound, and I know that it is less accurate than a solid-state amp.

Possibly. Some measurement should be the indicator of accuracy, surely, the same as some measure should explain why I like the sound of my tube amp (an Ongaku), which measures worse than other amps.

Absolutely. But let me be scientifically skeptical, yes?

I listened at Meridian last year, and MQA sounded much better on the pieces selected by them. Fine.

No, I didn’t say that. What I said is that a 44/16 file will become a 44/24 file (at least this is my understanding), becoming much bigger (and no, it doesn’t compress to a size similar to a 44/16 FLAC). In such a case, any information beyond the original 44.1 kHz sampling is simply upsampled data - which you could have done on the fly. That’s all I said.

Fuji uses a hexagonal array, from which they render the square pixels of normal files. That means sensor pixel resolution is not a useful metric. Is MQA as unconventional? Don’t know.

Wrt 16-bit data into MQA, there is something funny going on.
The MQA file shows up as a 24-bit file, and Roon shows the same thing. But I read somewhere that for this album, the MQA is still 16-bit. Maybe they just mean that the music is 16-bit, in a 24-bit container to hold the extra info.
But one of the MQA 2L albums (BNB) shows up in Windows as a 24-bit FLAC, yet Roon recognizes it as a 16-bit album. Confusing.
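
If you want to check what is actually in such a container, one crude way (and roughly what Roon or any other tool might do to report an "effective" bit depth) is to look at whether the low bits of the 24-bit samples are ever non-zero. A minimal sketch in Python, assuming libsndfile's usual left-justified integer conversion; the file name is just a placeholder, and of course an MQA file would not pass this test, because the folded data lives in those low bits:

```python
# Rough "effective bit depth" check: if the bottom 8 bits of every 24-bit
# sample are zero, the content is effectively 16-bit in a 24-bit container.
import numpy as np
import soundfile as sf  # pip install soundfile

data, rate = sf.read("some_album_track.flac", dtype="int32")  # placeholder path
# libsndfile left-justifies samples into 32 bits, so a 24-bit sample sits in
# bits 8..31; bits 8..15 are therefore the lowest 8 of the 24.
low_bits_used = bool(np.any(data & 0x0000FF00))
print("sample rate:", rate)
print("lowest 8 of the 24 bits in use:", low_bits_used)
```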

You are right about the size: the MQA FLAC is twice the size of the 16-bit file. But on another album, the MQA was six times smaller than the 352.8 version.

I don’t know anything about MQA anymore… I feel I know less now than before! And frankly, no one else knows either, beyond the PR lines, except possibly those who have tested some of the files, such as Jussi and others, and some who have listened to a few examples (which I have, but it is very hard to draw conclusions without knowing all the facts).

As for the SuperCCD vs Bayer arrangement, it’s very close to the analogy between PCM and DSD: it is not really hard at all to understand the mechanisms and resolution, and in some cases (especially with PCM vs DSD) they really are different enough that many comparisons aren’t meaningful. But comparisons like dynamic range, effective bit depth, maximum bandwidth, etc. should all be answerable in all cases.
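
For what it’s worth, the "classical" numbers being compared here are easy to write down. A back-of-envelope sketch (plain Python, textbook approximations only, nothing measured on MQA):

```python
# Textbook figures for comparing PCM formats: ideal quantization SNR
# (6.02*N + 1.76 dB) and Nyquist bandwidth. Real converters and
# noise-shaped formats will differ; this is just the classical arithmetic.
def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

def bandwidth_khz(sample_rate_hz):
    return sample_rate_hz / 2 / 1000.0

for bits, rate in [(16, 44_100), (24, 88_200), (24, 352_800)]:
    print(f"{bits}-bit / {rate / 1000:.1f} kHz: "
          f"~{dynamic_range_db(bits):.0f} dB dynamic range, "
          f"~{bandwidth_khz(rate):.2f} kHz bandwidth")
```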

This is actually a perfect example of what Jussi said. A normal image file uses a normal layout of pixels. Hence, with respect to such a layout, the pixels in the hexagonal image file have an “error”, i.e. the pixels are in the wrong positions. Then, to correct this “error”, the converted image file needs to contain many more pixels, i.e. have a higher resolution than would be necessary with a hexagonal grid. Further, this does not mean that any of the normal “theories”, e.g. sampling theory, is wrong. All image sampling techniques have aliasing problems (moiré) if you don’t use a low-pass filter; the only difference is that for a normal sensor moiré happens more often, because human constructions quite often contain multiple straight parallel lines, but not so often the multiple “parallel curves” that the hexagonal grid is sensitive to. It would, for example, be interesting to see what would happen with respect to moiré if you took pictures of honeycombs with the hexagonal grid.
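
The aliasing point is easy to demonstrate in one dimension, and moiré is just the two-dimensional version of the same thing. A tiny sketch, nothing MQA- or camera-specific:

```python
# Sampling without an anti-aliasing (low-pass) filter: a tone above Nyquist
# folds back and reappears as a lower frequency, indistinguishable from a
# real tone at that frequency.
import numpy as np

fs = 44_100                    # sample rate
f_in = 30_000                  # input tone, above Nyquist (22.05 kHz)
n = np.arange(fs)              # one second of samples
x = np.sin(2 * np.pi * f_in * n / fs)

spectrum = np.abs(np.fft.rfft(x))
f_alias = np.argmax(spectrum) * fs / len(x)
print(f"{f_in} Hz tone sampled at {fs} Hz shows a peak at ~{f_alias:.0f} Hz")
# Prints roughly 14100 Hz, i.e. fs - f_in.
```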

Getting back to DAC chips: as long as they are operating in a normal way, any correction would then need to “increase” the data. This is where HQPlayer comes in; e.g., if a chip operates using DSD256, this is “an error” compared to a PCM file in the same way as above, and hence a “correction” is needed. Of course all DAC chips contain a “correction” for this error, it’s just that this correction is of much lower quality than the one in HQPlayer.

@Miguelito, your posts hardly suggest you are open minded about this. Only a few days ago you stated that the DAC profiling was just “baloney”. One of many similar comments in your 50 (yes, fifty) posts on the topic. Good job that technical specs and patents are used to define what DAC profiling is then, isn’t it.

For a subject and a technology that you are “not interested” in, it is a puzzle why you spend so much time trying to discredit it, disprove it, and ridicule the people involved with it. Looks like a 50-post crusade!

For those interested in some insight into the Amy Duncan recording and MQA I found this to be interesting.

The question I posed was regarding the Amy Duncan album and its recording and MQA packaging: what should a user understand, given it is one of the first albums? The ‘MQA PR baloney machine’ kindly answered with the following:

MQA is a technology for delivering the very best sound quality from a recording, whether it is new, remastered or an archive recovery.

In addition to the sound quality improvements (that come from managing parameters end-to-end such as blur, noise, etc) we have the ability to indicate and guarantee Provenance.

Because MQA has a hierarchical architecture, the decoders extract the best sound matched to the platform you happen to be using. So, e.g. a file sourced from DXD can be unwrapped to 44.1, 88.2, 176.4 or 352.8 to match the device (important in legacy, WiFi and mobile applications).

Now, back to Provenance. When you see the MQA light it means you are hearing it as it should be. We take great care to encourage our content partners to only give us the actual and originally approved master. Our encoder is on the lookout for (and will query) up- or cross-sampling.

MQA is guaranteeing you are getting the real thing. It does not judge the content, rather we take an archivist viewpoint. So, if a recording is made in 44.1/16 and the artist is happy, or it is the only (remaining) document, that’s fine. Equally if it’s made in DXD or DSD256, that’s fine too – as are analogue tapes, cylinders – you get the picture.

In particular, we are not trying to apply any arbitrary definition of ‘High Resolution’ and, as we have commented elsewhere, sample rate and bit depth can be poor indicators of sound quality. My personal perspective on this can be seen in this Open Access paper: Stuart, J.R., ‘Soundboard: High-Resolution Audio’, JAES Vol. 63 No. 10, pp831–832 (Oct 2015) Open Access AES E-Library » Sound Board: High-Resolution Audio

Those of you who have owned Meridian products over the last decade know that good results are possible from CD.

MQA takes this to a new level for all content, using more advanced technology based on modern insights from sampling theory and neuroscience. MQA operating at 44.1 kHz has lower temporal blur than today’s ‘normal’ 192 kHz. (Read that sentence again and think what it means). Of course, it brings higher-rate recordings even closer, literally.

So now, Amy Duncan: This is a lovely recording. It was recorded at 44.1/24. Therefore that’s what we encoded, she approved and we delivered. Don’t be disappointed by any numbers, just listen.

And if you can afford it, try the Nielsen piano recording (also 44.1/24) and argue that it isn’t high definition. https://shop.klicktrack.com/2l/468051

Footnote 1: We do encourage retailers to make the original format clear at point of sale.
Footnote 2: You can always find the ‘OriginalSampleRate’ in the ID3 header as a check - it’s there to help server UI.
Footnote 3: If Explorer2 shows you one light and it’s blue, it means MQA from a 1x source. A blue (or green) light accompanied by some white lights indicates in the usual way that the source rate is higher.

I find these two comments extremely revealing. My emphasis below. I believe what Bob Stuart says here. YMMV.

And, for the last time, MQA is not lossy in any sense that is meaningful to human [EDIT: “hearing” (not “beings”)]. If you knew Bob, you would know that he wouldn’t stand for such a thing.

I played some MQA and original DXD recordings (converted on the fly to 192/24) yesterday through MQA decoding to compare the results.

Preliminary results largely confirmed my expectations. The MQA-encoded file seems to be cut to 88.2/16 first using a slow roll-off filter (HF content is gone by about 35 kHz) and then encoded as such. On the playback side it is then upsampled to 176.4/24 using a similar slow roll-off filter, leaving aliases clearly visible from about 65 kHz upward…
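
For anyone who wants to repeat that kind of check on their own decoded captures, a minimal spectrum sketch follows. This is just an averaged power spectrum, not Jussi's exact method, and the file name is a placeholder:

```python
# Averaged spectrum of a decoded capture, to see where the HF content stops
# and where image/alias energy reappears.
import numpy as np
import soundfile as sf              # pip install soundfile
from scipy.signal import welch      # pip install scipy
import matplotlib.pyplot as plt

x, fs = sf.read("mqa_decoded_capture.wav")   # placeholder file name
if x.ndim > 1:
    x = x[:, 0]                              # one channel is enough

f, pxx = welch(x, fs=fs, nperseg=65536)
plt.semilogy(f / 1000, pxx)
plt.xlabel("Frequency (kHz)")
plt.ylabel("Power spectral density")
plt.title(f"Averaged spectrum, fs = {fs / 1000:.1f} kHz")
plt.show()
```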

Lossy is a mathematical property, in the sense of meaning not bit-perfect, and MQA is lossy in that sense. Every apodising function is lossy in that sense by definition.

Does the particular way that it is lossy matter? Maybe not. The deblurring may mean that the decoded MQA file has higher fidelity to the original signal which created the master. Cool.

In my view MQA and those who support it just invite pointless argument by reacting to assertions it is lossy. Much better to agree that the decoded file is not bit-perfect with the unencoded master and note that one of the ways it is different is that time smearing has been reduced.
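
In that bit-perfect sense the test is a simple yes/no: decode both files to raw samples and compare. A minimal sketch (file names are placeholders):

```python
# "Lossy" in the bit-perfect sense is a binary check: are the decoded
# samples identical to the master, yes or no.
import numpy as np
import soundfile as sf  # pip install soundfile

a, fs_a = sf.read("original_master.flac", dtype="int32")      # placeholder
b, fs_b = sf.read("mqa_decoded_output.flac", dtype="int32")   # placeholder

bit_perfect = fs_a == fs_b and a.shape == b.shape and np.array_equal(a, b)
print("bit-perfect:", bit_perfect)
```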

However, they could just encode the result as normal standard FLAC and the resulting file would be smaller (consume less bandwidth) and wouldn’t need all the special hardware hassle for decoding…

So the only reason I can see for having the codec hassle is to be able to collect money from both ends of the chain.

Agree. Couple of things:

1- When I listened to MQA at Meridian, I heard better sound from MQA than from the original files

2- The presenters could not tell me anything about these files - eg was the MQA version remastered?

3- Those same presenters, and frankly almost the entirety of the MQA PR literature, are very vague about actual details other than “deblurring”

4- I take back the statement that “DAC profiling is baloney”

5- I do assert that amp/speaker/room are bigger factors than DAC profiling

6- I agree completely that effective bit depth and frequency extension are not the only factors, and all in all a filter that, for example, trades worse bit depth for improved time resolution might sound better - @jussi_laako would be able to comment here.

Time and frequency are related by the 1/x rule. Since what I am seeing is that high-frequency content is being filtered out by MQA, it also means a loss of time resolution. In addition, the increased noise floor makes those high-frequency harmonics get lost in the noise sooner.
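
To put a rough number on that 1/x relationship: the impulse response of a band-limited system cannot be much narrower than the reciprocal of its bandwidth, so cutting the bandwidth roughly in half roughly doubles the time smearing. Back-of-envelope only; these are illustrative bandwidths, not MQA measurements:

```python
# Back-of-envelope time/frequency trade-off: the main lobe of an ideal
# low-pass impulse response is on the order of 1/bandwidth wide, so
# filtering out HF content also coarsens time resolution.
for bw_khz in (17.5, 22.05, 44.1, 96.0):        # example bandwidths (assumed)
    t_us = 1e6 / (bw_khz * 1000)                # ~ main-lobe width in microseconds
    print(f"~{bw_khz:5.2f} kHz bandwidth -> ~{t_us:5.1f} us time resolution")
```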

If you input RedBook to the MQA encoder, you may get the apodizing effects of their filter, just like you get with any other apodizing upsampling filter. But removing so much high-frequency content from hi-res sources has, in my opinion, the opposite effect, although I can understand that it is a technical necessity to make their origami stuff work, since there are only a few bits available to encode the upper octave. Since the origami reduces compressibility with the FLAC delivery format, it just increases bandwidth usage compared to the equivalent non-origami file. (So the only “advantage” of the secret-recipe origami is to create class A and class B citizens depending on whether they have an MQA decoder.)
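
The compressibility point can be illustrated with a crude proxy: a 441 Hz test tone standing in for music and zlib standing in for FLAC. The absolute numbers mean nothing, but the direction does: noise-like data packed into the low bits is essentially incompressible, while zeroed low bits compress almost for free.

```python
# Crude proxy for the FLAC-compressibility argument: lossless coders cannot
# compress noise-like low bits, so filling the LSBs with folded data costs
# bandwidth. zlib is NOT FLAC; this only shows the direction of the effect.
import zlib
import numpy as np

fs = 44_100
t = np.arange(fs)                                                  # one second
sine16 = np.round(20_000 * np.sin(2 * np.pi * 441 * t / fs)).astype(np.int32)

clean = sine16 << 8                          # 16-bit signal in a 24-bit container
noisy = clean | np.random.default_rng(0).integers(0, 256, size=fs, dtype=np.int32)

for name, buf in [("low 8 bits zero ", clean), ("low 8 bits noise", noisy)]:
    size = len(zlib.compress(buf.tobytes(), 9))
    print(f"{name}: {buf.nbytes} bytes -> {size} bytes compressed")
```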

Could you explain (or point me to an article on) the different choices of apodizing filters, i.e. more pre- or post-ringing, and the frequency impact of making the pre/post-ringing choices?

It seems to me one important insight MQA brings here is the “most musical” choice of pre/post-ringing, but I might be wrong…
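
Not an MQA-specific answer, but the generic pre/post-ringing trade-off is easy to see by comparing a linear-phase low-pass FIR (symmetric ringing before and after the main tap) with a minimum-phase-style design (ringing pushed after it). A sketch using scipy, with a Butterworth IIR standing in for the minimum-phase case; the cutoff and rates are arbitrary examples:

```python
# Pre- vs post-ringing: a linear-phase FIR low-pass rings symmetrically
# around its main tap, while a minimum-phase design (here a Butterworth IIR
# as a stand-in) rings only after it. The magnitude responses are similar;
# what changes is where the ringing sits in time.
import numpy as np
from scipy.signal import firwin, butter, lfilter, unit_impulse
import matplotlib.pyplot as plt

fs = 88_200
fir = firwin(255, 20_000, fs=fs)             # linear-phase low-pass FIR
b, a = butter(8, 20_000, fs=fs)              # minimum-phase-ish IIR low-pass
iir_impulse = lfilter(b, a, unit_impulse(255))

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(fir)
ax1.set_title("Linear-phase FIR: ringing before and after the peak")
ax2.plot(iir_impulse)
ax2.set_title("Minimum-phase (IIR): ringing only after the peak")
plt.tight_layout()
plt.show()
```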

Time and frequency are related, but it is certainly possible to have timing errors that are unrelated to frequency resolution. When Thiel addressed time alignment by sloping the baffle, it had a huge impact on time precision without any change in frequency response; it did not appear in straightforward frequency response measurements but was readily apparent in an impulse response measurement (which nobody did, pre-Thiel, because speakers were uniformly terrible at it). And now lots of people do sloping baffles, Wilson does complex mechanical movements, and Meridian and others do time alignment in DSP crossovers. And Meridian’s EBA addresses the 20 ms errors introduced by the physics of woofer enclosures. And HT processors account for time-of-flight errors due to speaker placement.

It seems to me, once again, that you make simplistic statements that are correct at one level but do not address the substance of the work. Again, I make no judgment on the efficacy of the MQA work, but at least we should acknowledge that it attempts to address an issue that goes beyond the t-f relationship.

Similarly, it is not about apodizing. Meridian has done apodizing in their gear for several years (as have others), but it’s not the same. In fact, they say explicitly that if you listen to MQA without a decoder, one benefit is that the content is pre-apodized. So it is disingenuous to claim that MQA’s time smear reduction is just apodizing.

Really? So you know what MQA is about? I’m all ears!

No, no… you and Jussi have it all figured out already. It’s just a con to make us all buy more hardware, and there is no reason why it couldn’t all be done in normal FLAC files. That is what your posts of this week say, isn’t it?