Thanks Jussi,
I also agree with your statements. Your passion, drive, and straight-to-the-point talk impress me.
I also think the Roon team has proven that they are fantastic in the way they communicate and deliver.
I have the feeling that closer cooperation and integration between HQPlayer and Roon would not only make your customers happier but also benefit both of you.
I am a happy customer of both of you.
The DRM aspects of this are terrible, but there has been no mention that DRM is applied at all. I've fought with Bob Stuart about this when I worked with him at Meridian.
The key, however, for the A in MQA is that Authentication requires signed data, so it can be verified that the data has not been altered.
How do you know those FLACs you buy aren't 320k MP3s encoded as FLAC? Unless you analyze them all, you can't always be sure. It might just be a low-quality recording. Signing the data fixes that issue.
I consider it DRM when only licensed decoders placed inside certified hardware can decode the content and the data is encrypted. All other hardware and software get a degraded-quality "compatibility format".
So the content is placed in a seemingly open format (FLAC) while utilizing a second layer of encoding that is heavily protected.
In the current scheme as described, you don't know it either. You can send whatever kind of content to the encoding house for encoding. Or, if you are wealthy enough to spend $20k on the crypto hardware, you can do it yourself.
If, for example, Tidal encodes all their content with MQA, you certainly are not going to know exactly how it came to be.
This can be accommodated directly in standard FLAC without the need to encrypt the additional information so that only licensed decoders can decode it. The FLAC standard contains an MD5 checksum, and in addition you can place a metadata entry in the standard metadata block containing a signature of the MD5 hash.
This way:
- All standard FLAC decoders can verify the hash to confirm the data is intact.
- Slightly extended decoders can verify the signature of the hash to verify the source. You can use standard X.509 and PKCS #1 together with ASN.1 standards to make it completely standard.
- It doesn't violate the FLAC standard, and the audio data in the FLAC is completely standard, without trying to hide encoded information in LSBs and causing quality degradation.
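The scheme in the bullets above could be sketched roughly like this. Everything here is illustrative: the function names are mine, and an HMAC with a shared key stands in for the RSA/PKCS #1 signature the post proposes, so the example stays self-contained.

```python
import hashlib
import hmac

# FLAC already stores an MD5 of the raw audio in STREAMINFO; the proposal
# adds a signature of that hash as an ordinary metadata entry. An HMAC with
# a placeholder key stands in here for a real PKCS #1 / X.509 signature.
LABEL_KEY = b"label-signing-key"  # hypothetical stand-in for the label's key

def sign_audio(audio_data: bytes) -> dict:
    """Produce the two verification artifacts: the MD5 and its signature."""
    md5 = hashlib.md5(audio_data).hexdigest()
    signature = hmac.new(LABEL_KEY, md5.encode(), hashlib.sha256).hexdigest()
    return {"MD5": md5, "SIGNATURE": signature}  # stored as metadata entries

def verify_integrity(audio_data: bytes, meta: dict) -> bool:
    """Any standard FLAC decoder can do this: is the data unaltered?"""
    return hashlib.md5(audio_data).hexdigest() == meta["MD5"]

def verify_source(meta: dict) -> bool:
    """A slightly extended decoder checks the signature over the hash."""
    expected = hmac.new(LABEL_KEY, meta["MD5"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, meta["SIGNATURE"])

audio = b"\x00\x01" * 1000          # stand-in for decoded PCM samples
meta = sign_audio(audio)
assert verify_integrity(audio, meta) and verify_source(meta)
assert not verify_integrity(audio + b"\xff", meta)  # tampering is detected
```

In a real deployment the label would sign with its private key and decoders would verify against an X.509 certificate, so no shared secret would be needed; the shape of the check stays the same.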
The end-to-end thing you speak of is the MHR stuff, right? MHR has multiple reasons for existing, but the main one is that it allows them to rebroadcast data digitally, legally. Think of it as a way to overcome HDCP limitations. They needed it for their Blu-ray support. The labels are the ones tying them up here.
Your solution obviously works, but there would be no record-label-sanctioned content to encode/sign. They are protecting the masters, since they realize it's the only value they still have. It's a cash grab for sure, but this is the industry.
As for only the specially licensed tamper-resistant chips being able to decode the full MQA stream, that is so they can prevent pirating of the hi-res content. DRM by your definition indeed, and I agree, it blows.
TIDAL doesn't encode; they get MQA-encoded masters from the labels. The keys should prove that it came from the labels, and not from TIDAL or any intermediate modifier of the data. The labels could lie to you, but then again, you gotta trust someone. There is a grey area here. The masters have been lost in many cases, and then there are multiple secondary recordings. Who is to decide which is the new "master"?
So there are 3 aspects to MQA:
Master (they are getting labels to pull the masters out for re-encoding)
Quality (they are providing high-res)
Authenticated (they are building a system to let a user verify that it was indeed the original master)
The Master part is important because they are getting the labels to pull out the masters.
The Quality solution just goes away with time… in 5 years, when all our mobile phones are 5G and bandwidth for audio is not a concern, this folding stuff they do will just be annoying legacy. Raw DSD/PCM is the way to go.
The Authentication is important, but they need a public key repository and open signatures, and not the closed ecosystem they are building now.
The problem is that you can't get the labels to give up Master without giving them closed Authentication.
From the overall scheme I don't see how that is ensured, unless some trusted independent third party stands beside the process throughout. If Tidal puts out all the material they have as MQA, this certainly won't be the case.
The unfortunate part is also that the MQA encoding process seems to be heavily lossy (see the results above for the 2L DXD master vs the MQA version).
This is the part I have the strongest argument against. Based on my analysis, it uses more bandwidth with the origami folding stuff than it would at the same or better quality without it (see the results above). The reason is that the encrypted folding stuff becomes uncompressible noise for the FLAC encoder and requires a special decoder, while the standard version is properly compressible and decodable with any standard decoder. (This is the same as with any data: if you want to both encrypt and use ZIP compression, you need to ZIP-compress first and then encrypt; do it the other way around and the file is not compressible by ZIP at all.)
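The compress-then-encrypt point can be demonstrated in a few lines of Python. Here random bytes stand in for an encrypted payload, since both look equally patternless to a compressor; the payload contents and sizes are illustrative.

```python
import os
import zlib

# Redundant data compresses well; data that looks random to the compressor
# (as encrypted data does) gains nothing and may even grow slightly.
redundant = b"sample " * 10_000           # highly compressible payload
random_like = os.urandom(len(redundant))  # stand-in for an encrypted stream

compressed_redundant = zlib.compress(redundant, 9)
compressed_random = zlib.compress(random_like, 9)

assert len(compressed_redundant) < len(redundant) // 10  # shrinks a lot
assert len(compressed_random) > len(random_like) - 1000  # barely shrinks
```

The same effect applies when the "folded" portion of an MQA stream reaches the FLAC encoder: whatever looks like noise to the encoder passes through essentially uncompressed.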
We already have 300 Mbps 4G subscriptions here for €50/month. And my two cheap 50 Mbps 4G subscriptions are capable of doing quite a lot already at €20/month.
There are two perspectives we can take: whether the goals of MQA are worthwhile, and whether it is effective at achieving them. Danny commented on the goals from the perspective of the labels.
But I would disagree with both Danny and Jussi on whether saving bandwidth and computing power is a worthwhile goal. (Not on the efficacy issue; Jussi's compressibility point is valid.) Forgive a bit of tech drill-down, based on my professional knowledge of computers, not audio.
I have worked on software for over forty years, and we always had faith in hardware solving all our performance problems; it was only a matter of time. Moore's law would save us, not just on compute but on storage and network. But we have run up against barriers. Moore's law is really an economic and industrial observation; the underlying technical phenomenon is Dennard scaling, and it broke down around 2006. There are several reasons: one is that things are getting small enough that quantum uncertainty is a factor, but more central is energy density. Engineers compare chip energy density with that of rocket engines and the surface of the sun, famously inhospitable environments. In brief, we can't make computers faster because they melt; but they are cheap, so we can have many of them; yet most problems are difficult to "scale out" (run on many computers in parallel), and many are impossible. These issues are all factors in the technology shift behind the cloud transition, and the reason why some big players of the previous generation won't make the transition.
But it applies at the opposite end too, with mobile devices. Energy consumption matters because of battery limitations. Batteries have not improved at anywhere near Moore's-law rates.
This is why mobile devices are moving to specialized processors in hardware: they use less power for the same amount of (special-purpose) processing than software on a general-purpose processor. And as a consequence, a battery powered phone can play a high res video that would strain a PC without a GPU.
This is a radical shift, affecting both the devices and cloud-scale computing. Very uncomfortable relearning for many of us. For example, Skype was based on a peer-to-peer architecture similar to BitTorrent and its ilk; it worked great for PCs but needed significant rearchitecture because of the phone battery issue.
We are all facing an energy issue, just like the global warming thing, and it will require new engineering trade offs.
So much for the geek excursion. In summary, I think bandwidth conservation will remain an issue. (Although when we remember the root issue of energy consumption, we shouldn't save bandwidth by using CPU-intensive compression.)
And these technology observations are related to demographic and economic ones. Music is not only a first-world desire. And the audience is not only well-to-do, middle-aged white men who have built an audio temple at home. (I say that as a well-to-do, middle-aged white man with an audio temple at home, but you don't want to base product strategies on me!) People listen on the move. The young, and the third world. And me too!
So I think delivering quality sound with less bandwidth is a laudable goal, and we should not be complacent that hardware will save our bacon.
(Leaving aside the question of efficacy.)
Think I've read enough now to know that I'll simply avoid MQA.
Thinking the same thing here… without the content available, it doesn't matter much to me anyway; 2L is so niche, and I haven't heard about anything being signed with any of the majors for their back catalog.
Overall, I'm quite happy with the results I get with current CPU generations (35 W TDP T-series Skylake) combined with Nvidia CUDA offload to a 9xx-series GeForce GTX. Not too much heat, resulting in quiet cooling, and a huge amount of extra processing power compared to any DAC I've seen so far.
Given that I can stream 4K video over 4G mobile from Netflix without issues, there is quite a bit of bandwidth available. And this is without 5G, which is around the corner in about two years (operators here are running test networks already).
Other than that, standard FLAC is very efficient in terms of compression vs CPU usage, so there is no reason from that perspective not to use it for streaming, like Tidal already does. There is no need to layer another codec on top of it.
For myself, I'll continue to reserve judgment until I've heard MQA compared with the same material unencoded. But there are three things that concern me:
- The "origami" seems unnecessary in that (as I understand Jussi's earlier findings) it offers no compressibility advantage over hi-res material in FLAC format. The "one size fits all" advantage is less of an issue with various programs now supporting downsampling;
- Authentication seems to involve substituting my trusting the publisher (whoever they may be) for MQA trusting the label and my trusting MQA. That isn't worth the introduction of DRM under another name to me;
- The time deblurring sounds interesting, but it's not yet clear whether it will be hardware-only. If so, then MQA is unlikely to take off, imo. If it can be done in software in Roon and the result made available for further DSP (HQP, room EQ convolution, etc.), then it could be very interesting. Given the time that has elapsed since CES, when this issue seems to have first come to MQA's attention (believe it or not), I'm becoming more pessimistic about the prospects.
What kind of battery life do you get with HQPlayer on your phone?
There's no such version of HQPlayer; it's not even planned, because I don't see a point in using a phone as a primary music-listening device.
I use my phone to provide internet access through tethering when not at home, but that's all. At home, 4G network access is provided by a 4G-to-wired-Ethernet bridge (multiple external antennas, etc.).
On a laptop, HQPlayer running on Linux certainly consumes less battery than Windows 10 with Outlook + Lync…
However, in my listening room, on a Mac Mini, I can just as well run HQPlayer together with Roon. I get a lot of stuff done instead of idle cycles on the CPU.
Compressibility depends on the comparison; compared to high res, MQA is not bad.
For the 2L Arnesen Magnificat, the 352/24 file is 4.15 GB and the MQA is 671 MB, so MQA:352 is about 6×.
For the 2L Mozart Allegro, 352 and DSD128 are 750 MB; DSD64 is 375 MB; 44/16 is 50 MB; MQA is 100 MB. MQA:352 is 7.5×.
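A quick sanity check of those ratios, with sizes in MB taken from the figures quoted above:

```python
# Sizes in MB as quoted in the post above.
magnificat_352, magnificat_mqa = 4150, 671  # 2L Arnesen Magnificat
allegro_352, allegro_mqa = 750, 100         # 2L Mozart Allegro

print(round(magnificat_352 / magnificat_mqa, 1))  # 6.2, quoted as roughly 6X
print(round(allegro_352 / allegro_mqa, 1))        # 7.5, quoted as 7.5X
```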
Authentication: what distinction do you draw between the publisher and the label? Isn't that the same thing?
Hardware vs. software: I agree we should have it in software, but out of the whole music customer base, the segment that uses software on a computer is approximately 0.000%. So it doesn't speak to market success. I think market success will require easy implementation and cheap licensing for mass-market DACs and for the labels. We computer audiophiles will neither make nor break MQA. Sad reality: we don't rule the world.
Wrt deblurring, I don't have any feeling for its value yet. The samples I have heard have not stunned me. And to play curmudgeon a bit, I am not excited by technology advances where serious people get into a discussion of whether they can hear a difference or not. The miraculous new USB cable, the power conditioner, putting a brick on top of the DAC. When the pundits say it is like removing a veil, I keep thinking they have magazines to sell. I'm ok with subtle quality differences that require a bit of care to detect; that's true for many things. But I have always been more interested in investing in speakers, because the differences between two speakers are dramatic. Yuuuge! When I heard the Wilson XLF, I was stunned. I have never been stunned by a DAC in the same way. So far, MQA has been on the power-cable end of the spectrum, not the speaker end.
Of course, I was kidding.
I'm just suggesting that you are not typical, and neither am I.
You get pretty much the same size if you convert the DXD to 176.4/16 and compress to FLAC, and at the same time it preserves more high-frequency content.
You can check some size comparisons here:
http://www.computeraudiophile.com/blogs/miska/some-analysis-and-comparison-mqa-encoded-flac-vs-normal-optimized-hires-flac-674/
For the results I posted above (2L50, Britten: Simple Symphony, Op. 4, TrondheimSolistene):
- Standard 176.4/16 FLAC, preserving all high-frequency content up to about 56 kHz: 36,532,473 bytes.
- Standard 96/16 FLAC with an apodizing minimum-phase filter, preserving all high-frequency content up to 46 kHz: 20,694,978 bytes.
- MQA FLAC, preserving HF content only up to 30 kHz: 32,008,402 bytes.
Where do you envision people who don't use computers ending up using DACs with MQA decoding and accessing MQA-encoded content? I cannot imagine such a situation…
Many people streaming into an MQA-enabled mass-market device. MQA has to aim at phones and portable players.
Short of that, it won't be a mass-market success, and without that, the labels won't care.
Labels wonāt publish much for us.
Remember, Beats completely dominates the "premium" ($100+) headphone market with a 64% share.
It often is, but it need not be. By "label" I mean the recording company and any associated publisher. By "publisher" I mean whoever makes the vinyl, CD, or digital download and supplies it to shops or to me. This may be the label, but it can also be a third-party distributor or rights holder for a particular geographical area or technology. It may be more common for this separation to occur outside the USA.
Jussi is showing you hard evidence, and all of his claims are backed up with the data you see or he can provide. Your personal attacks are out of place.
Not an attack. I'm genuinely curious. Jussi gave a good answer.