MQA General Discussion

Add to that the cases where digital signals from very different recording chains get digitally mixed in a console - what does “profiling” mean here?

I think you all (Jussi excepted) want to love MQA, but you're pissed because you're left to wander in the dark without answers, and this thread has become the place to vent frustration and tear it apart in a fruitless attempt to scratch an itch that cannot be scratched.

I find it unsettling to be told all sorts of implausible or wildly exaggerated claims that I am having a hard time making sense of. I'd much rather have MQA tell me "This is a proprietary process and we will not give you details - listen and then take it or leave it."

Instead I hear claims about provenance and about tailoring the encoding to the ADC - when that is rarely possible to do - or that MQA will police who uses the certificates, which looks more and more impossible the closer you look, or that origami is a way to produce smaller, better-sounding files, when Jussi's analysis seems to show evidence to the contrary.

So I’d rather MQA come clean to us. The speculation here is largely due to the dodgy, overhyped PR lines that incited it.

Sure, MQA can’t say “this is good”. They can only say, “this is what the artist/publisher intended”.

I bought a book recently; it was labeled "unabridged", but it was a crappy book - I could already tell who the murderer was by the third chapter.

But think about it… Do you really think that artists will take the time to sit down, listen to the MQA-encoded-then-decoded file, and say "Yes, it sounds like I think it should"? It's hilarious.

For @jussi_laako to answer… I think he's been amazingly impartial in his analysis; he's backed up all of his claims with evidence, or with arguments against non-open-source file formats - all of which I agree with.

Actually, if they do use an HSM, they can constrain the use of the cert to their licensees. You can't get the private key out of the HSM and put it on the internet.

Of course, a licensee can abuse it. Trust is never entirely a technical issue, there is a business level of trust as well.

That’s what I thought artists do, together with the producer.

Ain’t nobody got time for that! :wink: I’m interested in photography and I’ve watched many interviews with celebrity photographers. The one common theme is they get a few minutes to take a shot of an artist. I very much doubt any artist is “signing” the MQA version. The producer will sign it whatever it sounds like.

(Btw, for anybody who has not had reason to look into modern crypto technology: these capabilities come from the magic of asymmetrical encryption, where you have a public and a private key, and those keys work together. I can give my public key to anybody, I can put it up on the Internet, anybody can encrypt messages with that public key, and only I can read it because I’m the only one who has the private key. For this case, the private key is tucked away inside the HSM, and the HSM hardware is built so you can’t get it out, even if you take the hardware apart. The publisher can sign content only with access to that HSM and the private key it contains. But anybody can validate that signature using the public key, with minimal compute resources. This clever stuff underlies all internet security.)
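To make the sign/verify idea above concrete, here is a minimal sketch using the Python cryptography library. The key pair is generated in memory purely for illustration; in the scheme described above, the private key would live inside the HSM and never be exported.

```python
# Minimal sketch of sign/verify with an asymmetric key pair, using the
# Python "cryptography" library. In a real deployment the private key
# would sit inside an HSM and never leave it; generating it in memory
# here is only for illustration.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The publisher's key pair (the private part would normally stay in the HSM).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

content = b"... audio payload or its hash ..."

# Signing requires the private key, i.e. access to the HSM.
signature = private_key.sign(content, padding.PKCS1v15(), hashes.SHA256())

# Anyone holding the public key can verify cheaply; verify() raises
# InvalidSignature if the content or the signature has been altered.
public_key.verify(signature, content, padding.PKCS1v15(), hashes.SHA256())
print("signature valid")
```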

You folks do know about the AES paper, right?

Well, my current take is mostly the following:

  1. Whatever deblurring there would be at the mastering side could be delivered in standard FLAC that can be decoded by all standard decoders; there's no need for the "origami folding" stuff for it (a minimal sketch follows after this list).

  2. Whatever signing and authentication there is could be handled better, technically, at a level where there is a GUI and internet connectivity.

  3. Doing digital room correction with the current scheme is not nice at all, because MQA is trying to prevent it.

So I have issues with the codec and DRM aspects of MQA. If they had some DSP processing inside the ADC and DAC for corrections, and standard PCM delivery for the content, I wouldn't mind.
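To illustrate point 1, here is a minimal sketch (with a purely hypothetical correction kernel and file names) of applying a mastering-side correction as an ordinary FIR filter and delivering the result as plain FLAC:

```python
# Apply whatever "deblurring" correction the mastering side wants as an
# ordinary FIR filter, then deliver the result as plain FLAC that any
# standard decoder can play. Filter taps and file names are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import lfilter

audio, rate = sf.read("master_96k.wav")          # hypothetical source file

correction_taps = np.array([0.02, 0.96, 0.02])   # placeholder correction kernel
corrected = lfilter(correction_taps, [1.0], audio, axis=0)

# Standard 24-bit FLAC output - no proprietary "folding" needed.
sf.write("master_96k_corrected.flac", corrected, rate, subtype="PCM_24")
```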

Yes, I have read it…

I ask because people are critical of Meridian for holding back information. There appears to be quite a lot of info there - much more than one typically finds for the foundational idea of a commercial enterprise.

Sure, but you can lose an HSM. And since they use that kind of setup, and they listed it as costing around $20k, they have set up a system of "encoding houses" - companies that do the encoding for you as a service.

So it is like Apple iTunes putting watermarks on the content they sell you - content the record company provided to them, that was originally recorded by someone and then mastered by someone else somewhere else. There are a couple of well-known mastering companies here in Finland too, like Chartmakers; they've been doing mastering for Rammstein, for example, which in turn is published by various big record companies in different formats.

And that's the problem. Or they can "abuse" it, or claim to have been misinformed, or whatever.

Overall, a CRL (certificate revocation list) system is a very important part of these kinds of schemes. That's why I was proposing the use of standard methods like the X.509 certificate system with PKCS#1 or similar.
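As a rough illustration of what that standard machinery buys you, here is a minimal CRL-check sketch using the Python cryptography library; the file names are hypothetical, and in practice the CRL would be fetched from the issuer before trusting any licensee certificate:

```python
# Minimal sketch of a CRL check with standard X.509 machinery, using the
# Python "cryptography" library. File names are hypothetical.
from cryptography import x509

with open("licensee_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

with open("issuer_crl.pem", "rb") as f:
    crl = x509.load_pem_x509_crl(f.read())

# Look up the certificate's serial number in the revocation list.
revoked = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
if revoked is not None:
    print("certificate revoked on", revoked.revocation_date)
else:
    print("certificate not listed in this CRL")
```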

There are always ways; just the level of difficulty, and thus the cost, varies. Chip pirates, for example, use laser slicing and X-ray/electron microscopy to create chip clones.

There are also similar software solutions for doing signing and other functionality without direct access to the keys, like, for example, one open-source project I've been working on since about 2007… The first full-blown implementation ended up in the Nokia N9 phone.

People usually start asking questions when it is about some new end-to-end delivery format that is like a black box. Someone puts something in and then something comes out. Your content is in that black box, and you need some black magic wand to get it out so it can be heard.

It is more of a black box than SACD, because there at least people knew the pipeline throughout, although the DSD content was heavily DRM-protected.

It is actually rare to have such black-box content formats. Practically all the widely used content delivery formats are open and well understood - MP3 (aka MPEG-1 Layer III), AAC, AVC (aka H.264), HEVC (aka H.265). These are standardized by international standardization organizations where multiple industry players co-operate to create the specifications.

And some are even free, like FLAC (used by Tidal), Vorbis (used by Spotify) and VP8/VP9 (used by YouTube). There has certainly been a lot of discussion about these too, technical and non-technical, but fewer questions, because everybody can study how they work.

Quoting the AES paper again for anyone who missed it further up the thread.

I thought this passage could be describing what MQA are doing with temporal blurring:

“In fact sampling and reconstruction kernel pairs can be selected that resemble neural kernels tuned to ensembles of natural sounds, and which provide less uncertainty of an event’s duration.”

Could someone briefly explain what a kernel pair is in this context and whether this approach has been tried in the past?

Edit: To answer my own question, these references assisted:

The paper also describes the reason why they add so much noise - to hide aliasing distortion from the leaky digital filter…

It means the anti-alias filter used for decimation at the ADC side and the anti-imaging filter used for interpolation at the DAC side.
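For anyone who wants to play with the idea, here is a minimal sketch of such a kernel pair using plain linear-phase windowed-sinc FIR kernels in SciPy - purely illustrative, and not a claim about the specific kernels MQA uses:

```python
# One FIR kernel acting as the anti-alias (decimation) filter on the ADC
# side and one acting as the anti-imaging (interpolation) filter on the
# DAC side. Plain windowed-sinc kernels, for illustration only.
import numpy as np
from scipy.signal import firwin, upfirdn

fs = 192_000          # original sample rate
ratio = 2             # decimate/interpolate by 2 (192k <-> 96k)

# Sampling kernel: low-pass below the new Nyquist before decimation.
aa_kernel = firwin(numtaps=127, cutoff=1.0 / ratio)          # anti-alias
# Reconstruction kernel: removes images after upsampling (gain of "ratio"
# compensates for the zero-insertion).
ai_kernel = firwin(numtaps=127, cutoff=1.0 / ratio) * ratio  # anti-imaging

x = np.random.randn(fs)                                   # 1 s of test signal
decimated = upfirdn(aa_kernel, x, up=1, down=ratio)       # "ADC side"
restored = upfirdn(ai_kernel, decimated, up=ratio, down=1)  # "DAC side"
```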

The paper contains references to other papers, but overall a lot of things have been tried. Generally, though, the one thing companies systematically don't want to tell is how they ended up with the kernel they use… :slightly_smiling:

My first takeaway from the MQA information and design was that the guys certainly didn't use rock/pop/soul/jazz/blues/etc. as primary test material… :smiley:
