Using some of the bits in the samples to convey high frequency data instead of high dynamic range data is just a choice of encoding scheme. This is commonplace in signal transmission.
We transfer bits. It’s just a convention to say that they correspond in a fixed manner to 2 channels of 24 bit words at a 96k rate. We may convert from analog that way, but transmission is a different matter.
Insisting that the signal sample rate correspond directly to the audio frequency, and the signal sample word length correspond directly to audio dynamic range, is unnecessarily simplified and backward-looking. It creates a rectangular data space that is very inefficient.
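To make the encoding-choice point concrete, here is a toy sketch (my own illustration, not MQA's actual scheme) of folding a high-frequency payload into the least-significant bits of a wider sample word. A legacy decoder just plays the 24-bit word, with the payload sitting down near the noise floor; an aware decoder extracts it.

```python
# Toy illustration (not MQA's actual scheme): fold an 8-bit high-frequency
# payload into the least-significant bits of a 24-bit sample word.

def pack(base_16bit: int, hf_payload_8bit: int) -> int:
    """Combine a 16-bit baseband sample and an 8-bit payload into 24 bits."""
    assert 0 <= base_16bit < 1 << 16 and 0 <= hf_payload_8bit < 1 << 8
    return (base_16bit << 8) | hf_payload_8bit

def unpack(word_24bit: int) -> tuple[int, int]:
    """Recover the baseband sample and the hidden payload."""
    return word_24bit >> 8, word_24bit & 0xFF

word = pack(0x1234, 0xAB)
assert unpack(word) == (0x1234, 0xAB)  # round-trips losslessly
```

The bits on the wire are the same either way; only the convention for reading them changes, which is the whole point being made above.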
I don’t think there is anything controversial about coming up with a more efficient use of the data rate, of the bits. It happens all the time. Look at how pkzip encodes data; nobody complains about that.
Once you accept that the idea can work, you can then discuss whether the particular implementation is effective and efficient and harmless. But decrying it as unnatural is meaningless.
Nobody complains because there’s no loss of data; it is lossless packing, mathematically identical. The same goes for FLAC. In digital recordings the full 24-bit data space exists (2 to the power of 24 combinations), but real-world recordings simply can’t use the full range of 24-bit resolution or dynamic range. What Bob Stuart said is correct, and I strongly agree.
17-bit at 96kHz is still considered hi-res, but there’s a more effective way to encode it in plain FLAC that still produces a comparable file size, stays lossless, and plays back on virtually any device out there.
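The reason a 17-bit stream packs down so well is that zeroing the unused low bits removes near-random content that a lossless coder cannot predict. A small sketch of the effect, using zlib purely as a stand-in for FLAC (the synthetic signal and the 24-to-17-bit truncation are my own assumptions for illustration):

```python
# Sketch: zeroing the bottom bits of each sample (24 -> 17 bits) makes the
# stream far more compressible by any lossless packer. zlib stands in for
# FLAC here purely to demonstrate the effect on a synthetic signal.
import math
import random
import struct
import zlib

random.seed(0)
N = 4096
# Synthetic "24-bit" samples: a 997 Hz sine at 96 kHz plus low-level noise.
samples = [int(0.4 * (1 << 23) * math.sin(2 * math.pi * 997 * n / 96000))
           + random.randint(-200, 200) for n in range(N)]

def pack24(xs):
    """Serialize samples as little-endian 24-bit words."""
    return b"".join(struct.pack("<i", x)[:3] for x in xs)

full = pack24(samples)
trunc = pack24([x & ~0x7F for x in samples])  # zero the 7 LSBs -> 17-bit

print("24-bit compressed:", len(zlib.compress(full, 9)))
print("17-bit compressed:", len(zlib.compress(trunc, 9)))
```

The truncated stream compresses markedly smaller because the noise-dominated low byte of each word is mostly gone; FLAC exploits the same redundancy, only with audio-specific prediction.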
Yes, I have seen Jussi’s claim for that, for example.
If you accept that compact encoding of high resolution data could be useful, then it is fair game for anybody to attempt to address this opportunity with alternate innovations.
The challenge is that a solution requires not just an encoding scheme with the balancing of various tradeoffs, but also getting the publishers and streaming services and hardware and software manufacturers to support it. The MQA team are working hard at that, and there is a lot of complaining about the slow pace of adoption.
So far, I have seen no content published in any alternate formats.
As more test results are published and compared against the claims, we begin to see the real picture! There’s a trade-off between implementing good time-domain correction and aliasing effects. Interestingly, there are more impulse-response digital filters than in any standard PCM playback! Which one is actually in use, without the user’s knowledge, remains a mystery… Resampling anything above 96kHz to 384kHz does not add any real information; the first unfold, up to 48kHz, is the more important one.
What I don’t like is when someone takes a 44.1/48kHz recording, gives you 88.2/96kHz, and sells it as hi-res; obviously this is cheating. It has happened elsewhere before, and now we are seeing it all over again.
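The point that upsampling creates no new information can be checked directly: zero-stuff a band-limited signal to double its rate, interpolate with a brick-wall low-pass, and the upper half of the new band stays empty. A small DFT sketch (toy-sized signal and a naive O(N²) DFT, chosen only to keep it self-contained):

```python
# Sketch: doubling the sample rate of a band-limited signal adds no new
# spectral content. Zero-stuff, brick-wall low-pass in the DFT domain,
# and the band above the old Nyquist carries essentially zero energy.
import cmath
import math

N = 64
sig = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]  # tone well below Nyquist

def dft(x):
    M = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / M) for n in range(M))
            for k in range(M)]

def idft(X):
    M = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / M) for k in range(M)).real / M
            for n in range(M)]

# Zero-stuff (insert a zero between samples), then brick-wall low-pass:
# keep only the baseband image, scaled by 2 to restore amplitude.
up = [v for s in sig for v in (s, 0.0)]
U = dft(up)
M = len(U)
U = [2 * U[k] if k < M // 4 or k > 3 * M // 4 else 0.0 for k in range(M)]
out = idft(U)

# Energy in the new band above the old Nyquist is numerically zero.
Out = dft(out)
hf_energy = sum(abs(Out[k]) ** 2 for k in range(M // 4 + 1, 3 * M // 4))
print(f"energy above old Nyquist: {hf_energy:.2e}")
```

A genuine 96kHz recording would have real (if small) content in those upper bins; a fake one labeled hi-res cannot, which is why spectral inspection exposes the cheat.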
Well you are still here, why would not anyone else still be here?
It seems to me there are basically two camps on MQA: those that accept the claims made by the MQA team and consequently like it, and those that are sceptical about it. I belong in the sceptic camp. The more info that gets put into the public domain by the likes of Archimago, the more reasons there are to be a sceptic. There is some truth in the saying that there is no such thing as a free lunch. MQA cannot deliver audio Nirvana with no downsides. There is no magic in what it does, just obfuscation.
A debate is a good thing if it is informed and we are not fooled by professional dressing up of flawed analysis.
Archimago often posts some good stuff, but I’ve not seen an analysis of MQA from him that isn’t incorrect in some way, incomplete, or missing some crucial point.
The latest blog, IMO, reaches incorrect conclusions on the filtering. Obviously he is not privy to the full MQA process (nor am I, for that matter), but a simple patent search reveals that Bob Stuart and Peter Craven have moved beyond linear phase vs. minimum phase. In fact, the patent application reveals that they are very much pro-linear phase; it’s just the pre-ringing that is the problem. They have apparently figured out a way to have their cake and eat it: linear phase (for best spatial accuracy) with selective group delay (to get rid of existing ringing and to suppress ringing in downstream filters).
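For readers unfamiliar with the pre-ringing trade-off: a linear-phase FIR filter is symmetric about its main tap, so by construction half of its ringing energy arrives before the impulse peak, whereas a minimum-phase design pushes that energy after the peak (at the cost of phase distortion). An illustrative sketch with a textbook windowed-sinc design (my own example, not any MQA filter):

```python
# Illustrative only: a linear-phase (symmetric) windowed-sinc low-pass
# rings equally before and after its main tap. The energy ahead of the
# peak is the "pre-ringing" discussed above; a minimum-phase design
# would shift that energy to after the peak instead.
import math

def linear_phase_lowpass(num_taps=63, cutoff=0.25):
    """Hamming-windowed sinc FIR; cutoff as a fraction of the sample rate."""
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        x = n - mid
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))  # Hamming window
        taps.append(h * w)
    return taps

taps = linear_phase_lowpass()
peak = max(range(len(taps)), key=lambda i: abs(taps[i]))
pre = sum(t * t for t in taps[:peak])
post = sum(t * t for t in taps[peak + 1:])
print(f"energy before peak: {pre:.4f}, after peak: {post:.4f}")  # equal by symmetry
```

The scheme described in the patents would aim to keep the symmetric filter's frequency-domain behaviour while reshaping where in time that ringing energy lands.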
Regarding the selection of different filters, that shouldn’t be a surprise, because (a) it’s public information that the mastering engineer has options in the encoding, and (b) Bob has always said that MQA is an end-to-end process. My take on this is that, if you encode a track one way, you need to select the appropriate filter during final unfold and rendering.
In terms of aliasing, what matters is whether or not it’s audible. MQA says not. The pro-camp are likely to trust MQA; the anti-MQA camp are likely not to…
MQA’s claims actually matter a lot more to the sceptics than the pro crowd. They are the ones constantly trying to pick those claims apart. It is probably also misleading to structure the argument in a way that suggests people ‘like’ MQA because they ‘accept’ the marketing. People like or dislike MQA for a myriad of reasons, but mostly people like it because it sounds good to them. A lot more are simply indifferent.
This large selection of filters must vary in sound quality; otherwise they wouldn’t have been built in in the first place! With such large variation, which one actually sounds closest to the original performance? We don’t know, but I suspect these filters are meant to match the kinds of musical content out there. There’s more to come, but we are getting there, which is good.
The same applies to pre-ringing. It is my understanding that there is little to no evidence that it is audible. I agree that it is better if it can be removed, but why trade one supposedly inaudible problem for another?
If the consensus is non-controversial only within audiophile/Hi-Fi circles, then that does not provide much substantiation. Audiophiles have been “hearing” digital playback artifacts since 1983.