MQA General Discussion

I’m not interested in the decoder itself, I’m only interested in checking the performance and properties of the decoded MQA.

Understood.

It just strikes me as obvious that if you buy, say, an Explorer2 and modify it to get I2S or SPDIF out, you could use it as a USB bridge to your high-end DAC.

[quote=“miguelito, post:222, topic:8204”]
Explorer2 and modify it to get I2S or SPDIF out, you could use it as a USB bridge to your high-end DAC
[/quote]However, MQA would be using the DAC profile for the chip within the Explorer2 and not the one in the other device that it would be feeding. So whilst it might work, it would not yield the true MQA experience.

Correct. But I really think this “DAC profile” is a bunch of baloney. And you cannot possibly tell me that the chip in the Explorer2 will, with the DAC profile and all, deliver better sound than a high-end DAC.

Happy to be proven wrong…

I bet Bob Stuart is quaking in his boots.


Ok, let me qualify this a bit…

I would expect a “DAC profile” to include things like the maximum sample rate handled. This is of course required information. If the DAC chip in the Explorer2 has a maximum sample rate of 96 kHz, then that’s the maximum you’ll get.
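As a toy illustration of that capping (purely hypothetical names and logic; MQA’s real decoder is closed, so this only sketches the idea described above):

```python
# Hypothetical sketch: a decoder capping its output rate to the DAC's
# maximum. The function name and octave-stepping logic are invented for
# illustration, not taken from any MQA documentation.

def render_rate(source_rate: int, dac_max_rate: int) -> int:
    """Unfold in octave (2x) steps until the next step would exceed the DAC."""
    rate = source_rate
    while rate * 2 <= dac_max_rate:
        rate *= 2
    return rate

# A 44.1 kHz stream into a 96 kHz-limited chip stops at 88.2 kHz:
print(render_rate(44_100, 96_000))   # 88200
print(render_rate(48_000, 192_000))  # 192000
```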

As for changes to the MQA decoding itself given a target sample rate: if there are any “tweaks” required, I would expect such tweaks to overcome shortcomings in lower-end DAC chips. High-end DACs sound great with “standard” PCM streams, assuming those streams are of high quality - no odd massaging required. That is what I mean.

If the DAC chip in the Explorer2 is cheap enough that a lot of such massaging is required to make the Explorer2 sound good, then I agree the output stream might not be right for a better DAC. But I doubt this is the case.

[quote=“miguelito, post:224, topic:8204”]
But I really think this “DAC profile” is a bunch of baloney
[/quote]That’s a view I don’t share.

Take an optical lens: it can be measured and its characteristics (defects) mapped.
These characteristics can then be used to process the resultant image in order to correct it.

The DAC profiling in MQA is performing a similar function, and yields a more accurate analogue output.
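To make the measure-and-invert idea concrete, here is a minimal numerical sketch (plain NumPy, with the “defect” entirely invented for illustration): a known device response is measured, inverted in the frequency domain with regularization, and applied to undo the damage. This only illustrates the principle; MQA’s actual processing is not public.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(4096)

# Simulated device defect: a 2-tap smoothing filter (a mild HF roll-off).
defect = np.array([0.9, 0.1])
H = np.fft.rfft(defect, n=len(signal))

# "Measure" the device output (circular convolution via the FFT).
measured = np.fft.irfft(np.fft.rfft(signal) * H, n=len(signal))

# Build a regularized inverse of the measured response and apply it.
inverse = np.conj(H) / (np.abs(H) ** 2 + 1e-6)
corrected = np.fft.irfft(np.fft.rfft(measured) * inverse, n=len(signal))

print(np.max(np.abs(corrected - signal)))  # tiny residual error
```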

Now assume your high-end DAC is perfect and the Explorer2 is not.
MQA will be applying corrections that are unsuitable for the other DAC, and thus the resultant analogue signal will not be as it should be.

[quote=“miguelito, post:224, topic:8204”]
And you cannot possibly tell me that the chip in the Explorer2 will, with the DAC profile and all, deliver better sound than a high-end DAC.
[/quote]I did not say that at all; I have no reference to be able to make that comparison.

Yes, I agree with this in principle. It is similar to monitor calibration to get good color rendition. However, unlike the monitor calibration case, there are many factors that affect the sound - amp, speakers, room - and these factors have a much greater impact than the profile of a chip.

The analogy still stands up. Viewing a printed image in poor/low/high/coloured light will destroy the image we see, whatever is done with colour profiles. That’s the same as your concern that amp, speakers, room will damage what comes out of the DAC.

There is no way MQA can compensate for what is downstream of the DAC, but it can compensate for the ADC-DAC chain. And it’s worth doing that, even if the user is able to then put a rubbish amplifier after the DAC.

Watching a calibrated monitor in a mildly lit room is not like the room/speaker case. But I can’t argue this point, and frankly I can’t really argue the DAC-profile point either; then again, neither can you, other than by accepting the PR line from MQA.

“Compensating for the DAC” is something that I would take with a grain of salt (or 10) when you’re talking about high end DACs.

For most of us MQA will be a non-event until such time as large chunks of our existing collections become available in MQA format…and then only if it does indeed work magic.


My only interest in MQA is streaming on TIDAL.

I do see value in MQA as an indicator of careful mastering, but since that comes married to lossy compression, I have an issue with owning material in this format. I expect anyone bothering with MQA at all would produce a high-quality master anyway, so that advantage is mostly academic.


The biggest problem is that many defects happen above the Nyquist frequency of PCM sources, and/or are due to insufficient DAC-chip performance in its DSP engine and conversion stage. That is something you cannot correct by performing computations within the PCM sample rate… And since MQA seems to have about 16 bits’ worth of dynamic range, most of the correction is lost in the added noise.
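As a back-of-envelope check of that noise-floor argument, the standard quantization SNR formula (6.02·N + 1.76 dB for an N-bit quantizer) puts numbers on it. Note the ~16-bit figure for MQA is the claim above, not an established specification.

```python
# Theoretical dynamic range of an ideal N-bit quantizer.
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB")
# 16-bit: ~98 dB, 24-bit: ~146 dB
# Any "correction" below roughly -98 dBFS vanishes into a 16-bit noise floor.
```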

So, returning to the camera analogy: you have already lost when you try to perform these corrections on the lossy JPEG produced by the camera. You need to work on the raw image-sensor data, and even then, if the image sensor has less resolution (and thus greater errors) than the lens, you cannot correct for the lens by playing around with a low-resolution image.

Would you re-purchase your entire existing content collection in MQA? I wouldn’t re-purchase mine even if it became available as true hi-res.

I’ve been hit by closed content formats before. I have a bunch of HD DVDs I cannot play since there are no players anymore. The same will likely happen to SACD and Blu-ray soon. No, I will simply refuse to buy any content in a format that is not documented openly and cannot be decoded by a third-party implementation based on the openly available information.


I said ‘for most of us’. Personally I see no point in repurchasing anything I already own unless it’s a favourite and significantly better. I also have no intention of replacing my DACs anytime soon, so I will have to live without MQA unless it’s handled within Roon. I haven’t heard any MQA, so I cannot comment on the sound.

And MQA makes it very hard to perform the digital room correction I’m doing in my system. I run the correction filters in the player, and the errors corrected by those filters are at least an order of magnitude greater than anything the DAC profile would address.


Not commenting on the audio processing, but the photo analogy is incorrect.
It is certainly possible and useful to do lens correction on JPEG data.

Not quite correct, if you’re talking about geometry (pincushion-type lens distortion) or vignetting (light fall-off at the corners). Today many mid/higher-end point-and-shoots will do geometry/vignetting correction when producing the JPEG. You can also do this in Lightroom for just about any lens available, in either the raw or JPEG domain.

I think the same. I heard MQA at Meridian last year, and it did sound a lot better, but the provenance of the non-MQA vs MQA versions (i.e. was it remastered?) was not clarified by the hosts.

Let’s say you take a VGA-resolution (640x480) version of the picture, attempt geometry corrections on it, and then store the result at the same VGA resolution: the result would be really horrible. It looks bad even if you use some other display resolution that matches your screen, such as full-HD 1920x1080. You need to perform geometry corrections at a significantly higher resolution than your intended display resolution. If you start at 1:1 (pixel-accurate) resolution, the result is less than pixel-accurate. For level corrections such as vignetting this may be less noticeable, but you would need more dynamic range than JPEG provides.

Also, if you start with a VGA-resolution JPEG and the pincushion error is only a quarter-pixel at the edge, you just don’t have the resolution to do anything about it. And if the vignetting error is in the 9th bit of colour depth, your 8-bit JPEG doesn’t have the dynamic range to even represent the error.
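A minimal sketch of the upsample-correct-downsample workflow being argued for here, assuming OpenCV is available; the file names, the 4x working resolution, and the radial coefficient k are all invented for illustration.

```python
import cv2
import numpy as np

img = cv2.imread("photo_vga.jpg")          # 640x480 source (hypothetical file)
up = cv2.resize(img, None, fx=4, fy=4,     # work at 4x resolution
                interpolation=cv2.INTER_LANCZOS4)

h, w = up.shape[:2]
cx, cy, k = w / 2, h / 2, 1e-7             # toy radial (pincushion) model
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
r2 = (xs - cx) ** 2 + (ys - cy) ** 2
map_x = cx + (xs - cx) * (1 + k * r2)      # where each output pixel samples
map_y = cy + (ys - cy) * (1 + k * r2)

corrected = cv2.remap(up, map_x, map_y, cv2.INTER_LINEAR)
out = cv2.resize(corrected, (img.shape[1], img.shape[0]),
                 interpolation=cv2.INTER_AREA)   # back down to source size
cv2.imwrite("photo_corrected.jpg", out)
```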

It is the same for audio: if you do digital room correction, you want to output at a higher resolution than the source. What you certainly don’t want to do is start with 44.1/16, do digital room correction, and then output 44.1/16 again. Instead you’d want to output at least 88.2/24.
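A minimal sketch of that audio workflow with SciPy: upsample 44.1 kHz material to 88.2 kHz before applying the correction filter, and keep the result in a higher-resolution format. The pass-through “filter” is a placeholder; a real one would come from room measurements.

```python
import numpy as np
from scipy.signal import resample_poly, fftconvolve

def room_correct(x_44k16: np.ndarray, fir: np.ndarray) -> np.ndarray:
    x = x_44k16.astype(np.float64) / 32768.0   # 16-bit int -> float
    x = resample_poly(x, up=2, down=1)          # 44.1 kHz -> 88.2 kHz
    y = fftconvolve(x, fir, mode="same")        # apply room-correction FIR
    return y                                    # write out as 24-bit / 88.2k

# Placeholder filter: a pass-through impulse instead of a measured response.
corrected = room_correct(np.zeros(44100, dtype=np.int16), np.array([1.0]))
```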

Yes, they do it before encoding the JPEG…

Doing any notable editing on a JPEG reduces the already-compromised quality even further, because you perform significant operations on a lossily encoded file and possibly re-encode it in the lossy format again. This is similar to opening an MP3 in an audio editor, editing it, and then storing the result in MP3 format again…
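That generational loss is easy to demonstrate. A minimal sketch with Pillow (file names invented): re-encode the same image ten times at a typical quality setting and watch the artifacts accumulate.

```python
from io import BytesIO
from PIL import Image

img = Image.open("original.jpg").convert("RGB")
for generation in range(10):
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=85)   # lossy re-encode
    buf.seek(0)
    img = Image.open(buf).convert("RGB")       # decode the degraded copy
img.save("generation_10.jpg")                  # artifacts accumulate each pass
```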