New TIDAL tiers and MQA

And non-bandwidth limited isn’t generally a thing. While I liked @philr’s post as it was thought-provoking I’d argue that the test results don’t necessarily show what he claims.

This statement just doesn’t add up. If you’ve lost information in the encoding, it’s a stretch to claim you have it back at either end of the process. It’s also pretty much impossible to verify.

1 Like

It does make sense if you understand the philosophy of what MQA are delivering.

Lossless in the sense that:

  1. If you take away the frequencies above our hearing range, and also above the frequencies that contain key micro-dynamics, so circa 50 kHz (as measured using people’s brainwave reactions in MRI scanners, not on some computer software or oscilloscope).
  2. You then also remove data that is below the noise floor: so, effectively, when you have the silent room in the studio before any instrument is played or before anyone starts to sing, you remove that data.

So yes, it’s lossy in that sense, but if you measure the analogue signal (which doesn’t contain any of that data, as there is effectively nothing there to record) then it is the same… well, lossy to the same degree as sound travelling through air. It’s a time-domain measurement they are using as a reference.
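The two reductions described above can be sketched numerically. This is a toy illustration only, not MQA’s actual algorithm: the 50 kHz cut-off and the noise-floor threshold are assumptions taken from the post, and the signal is synthetic.

```python
import numpy as np

# Toy sketch of the two reductions described above (NOT MQA's real encoder):
# 1) discard spectral content above an assumed 50 kHz relevance limit;
# 2) discard bins whose level sits below an assumed noise floor.
fs = 192_000                                    # hi-res master sample rate
t = np.arange(fs) / fs                          # one second of "audio"
sig = (np.sin(2 * np.pi * 1_000 * t)            # audible 1 kHz tone
       + 0.5 * np.sin(2 * np.pi * 60_000 * t)   # ultrasonic content
       + 1e-6 * np.random.randn(fs))            # hiss far below the floor

spec = np.fft.rfft(sig)
freqs = np.fft.rfftfreq(len(sig), 1 / fs)

floor = 1e-3 * np.max(np.abs(spec))             # assumed noise-floor threshold
keep = (freqs <= 50_000) & (np.abs(spec) >= floor)
reduced = np.where(keep, spec, 0)               # "lossy" step: zero what we drop

out = np.fft.irfft(reduced, n=len(sig))         # reconstruct the reduced signal
print(f"bins kept: {keep.sum()} of {len(spec)}")
```

In this sketch the audible 1 kHz tone survives untouched while the 60 kHz tone and the hiss are discarded, which is exactly the sense in which the scheme is “lossy only where (it is claimed) nothing audible lives”.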

Unless people are prepared to open their minds and think more holistically about what the real sound is (I would say the actual analogue vocal or instrument), we will never move on.

3 Likes

All of these semantic cartwheels suggest another thread has drifted massively off-topic. Again.

But could I just “check my understanding” here, because these two points seem like a contradiction in terms, or having your cake and eating it:

Apart from not being sure what MRI scanners have to do with audibility, the average 30-year-old’s hearing drops off significantly above 16 kHz. 50 kHz? Er, no (not least because there isn’t a single amplifier or speaker in the world with the capability to reproduce it).

But at 50kHz there are “key micro dynamics” (whatever that means)?

I’m sure you can see the contradiction in your own argument.

3 Likes

Could anyone who believes in ultrasonic frequency perception show me:

  1. a studio microphone rated to 50 kHz and its frequency response graph
  2. speakers rated to 50 kHz and their frequency response graph
  3. how to do EQ room correction so that 50 kHz reaches the listening position

I can only find those in the “ultrasonic” category of microphones and speakers, which are not intended for audio reproduction.

Thank you…

(It’s a rhetorical question, just to prompt some thought about recording, processing and reproducing those ultrasonic frequencies.)

4 Likes

2L release their entire catalogue as MQA.

1 Like

Yep, I avoid anything from 2L like the plague for that reason.

1 Like

You’re missing some good music…

1 Like

Except for what they release as DSD.

Looks like we’ll have options: listening to their MQA ■■■■ (TIDAL), or them listening to our ■■■■ (Spotify). Spotify’s recommendation algorithm is already freakishly accurate, but the patent adds another layer of creepiness, as it involves an always-on listening device. However, the letter also argues the tech is emotionally manipulative, discriminatory against trans and non-binary people, violates privacy and data security, and exacerbates inequality in the music industry. “Music should be made for human connection, not to please a profit-maximizing algorithm,” the letter reads.

Either way it’s a very ■■■■■■ future…

This is simply mind-boggling. The 2L recordings are some of the most beautifully recorded material around: very coherent spatially, giving them an excellent live feel. You owe it to yourself to drop what appears to be an MQA fixation and just listen. 2L sounds good in any format you choose, although the immersive versions are top.

5 Likes

No thanks. I have scruples.

1 Like

If you have an interest in finding out how humans actually hear sound and care to read any of the research that has been undertaken in neuroscience, you will find that although we cannot hear above circa 20 kHz, which everybody knows (hence CD’s 22.05 kHz Nyquist limit), we do sense it (not many know about this). It is these frequencies, the 20–50 kHz range, that give us the sense of direction and distance of sound (the micro-dynamics in the sounds we hear).
This can only be appreciated by scanning the brain while playing signals including content above 20 kHz, then playing those signals again with everything above 20 kHz removed, and monitoring how the brain behaves. This has been proven and is readily available if you can be bothered to go looking for it.
This cannot be measured on a YouTuber’s computer software or oscilloscope… but it could be if they had an MRI scanner in their bedroom, or lounge for that matter.

So neither of my comments is contradictory; in fact they are complementary with regard to how we as humans actually hear sound. Go on, have a read, I find it really interesting… you will need to have an open mind though, something very few on this forum seem to have when it comes to a closed, end-to-end system like MQA. Their loss, in my opinion. Each to their own. :grin: Mind you, there are those that still think the earth is flat… I’m not one of those.

3 Likes

“Proven” is, as always in science, taking it way too far. From wikipedia:

The hypersonic effect is a phenomenon reported in a controversial scientific study by Tsutomu Oohashi et al.,[3] which claims that, although humans cannot consciously hear ultrasound (sounds at frequencies above approximately 20 kHz),[4][5][6][7] the presence or absence of those frequencies has a measurable effect on their physiological and psychological reactions.

Numerous other studies have contradicted the portion of the results relating to the subjective reaction to high-frequency audio, finding that people who have “good ears”[8] listening to Super Audio CDs and high resolution DVD-Audio recordings[9] on high fidelity systems capable of reproducing sounds up to 30 kHz[10] cannot tell the difference between high resolution audio and the normal CD sampling rate of 44.1 kHz.[8][11][12][13]

Let me add that I don’t see the point of checking what MRI scanners show if it doesn’t translate to music enjoyment. Brain science is cool and all, but showing changes in the blood flow in the brain doesn’t automatically mean that 50 kHz is important in music.

4 Likes

AnalogA → ADC → DAC → AnalogB

AnalogA will never equal AnalogB (except maybe test tones). You can test this. I can test this. We all can test this.

It’s a fundamental fact of “sampling” that the ADC/DAC does.
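A minimal numeric sketch of that fundamental fact: uniform quantisation (here 16-bit, the only part of the ADC/DAC chain modelled; the test frequency and sample rate are arbitrary choices) is a many-to-one mapping, so the reconstruction can only ever approximate the input.

```python
import numpy as np

# Model only the quantisation step of an ADC -> DAC round trip:
# sample a signal, round each sample to the nearest 16-bit level,
# then scale back to "analogue".
fs = 48_000
t = np.arange(fs) / fs
analog_a = 0.8 * np.sin(2 * np.pi * 997 * t)   # 997 Hz "analogue" input

levels = 2 ** 15                               # 16-bit signed full scale
digital = np.round(analog_a * levels)          # ADC: quantise
analog_b = digital / levels                    # DAC: scale back

err = analog_b - analog_a                      # round-trip error, never all-zero
print(f"peak round-trip error: {np.max(np.abs(err)):.2e}")
```

The error is bounded by half a quantisation step but is never exactly zero for a real-world signal, which is the sense in which AnalogA can never equal AnalogB.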

Very confusing your comment is.

1 Like

On my TV I only watch hi-res video content which is expanded down to ultraviolet wavelengths and up to infrared, because it has an impact on my brainwaves. I also installed a screen wavelength filter to remove microwave and radio-wave interference.
It costs me quite a lot, but I can see the difference.
Oh wait, what camera is the director using? Can it really record infrared? And my videophile TV’s spec doesn’t seem to cover wavelengths outside the visible spectrum. It can’t even produce the complete visible spectrum.
But it is expensive hi-tech, so there must be better quality. I just don’t know how they do it, but I swear I can hear… ehm, sorry, see it.

1 Like

I will say more - AnalogA will never equal AnalogA after one glass of wine … :wink:

2 Likes

MQA corrects for the errors/changes in the signal that the ADC introduces (otherwise it’s lossy), so the output from the ADC matches the input.

If you have an MQA DAC, then the errors/changes that would normally be introduced by a conventional chip (another lossy process) can also be corrected for. So the analogue output from the DAC matches the analogue signal that went into the ADC: lossless, analogue to analogue.

This can only be controlled in an end-to-end system… which many people seem to be against.

1 Like

Or moving your head 6 inches, if using speakers.

2 Likes

How does it actually do that? In today’s music most content is built up from digital files: demos, lossy samples, various studio recordings, maybe analogue bounce-backs, all with differing ADC qualities, bit depths and sampling rates. How is MQA going to come up with the correct filter to suit this digital soup? Surely it’s better for the user to change the sound to suit his own listening environment with a PCM/DSD file, and even apply some room correction, which would probably be more beneficial than a preset filter system you have no control over. And if you don’t apply the special MQA sauce, you are left with a sub-standard file to work with.
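For a concrete sense of what user-side correction looks like, here is a minimal sketch of a single parametric-EQ band of the kind room-correction software applies to a plain PCM stream. This is the standard RBJ Audio-EQ-Cookbook peaking biquad, nothing MQA-specific; the 100 Hz centre frequency, +6 dB gain and Q are made-up example values.

```python
import numpy as np

def peaking_biquad(fs, f0, gain_db, q):
    """RBJ-cookbook peaking-EQ coefficients (b, a), normalised so a[0] = 1."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

def apply_biquad(x, b, a):
    """Direct-form I difference equation: y[n] = sum(b*x) - sum(a*y)."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = (b[0] * x[n]
                + (b[1] * x[n - 1] if n >= 1 else 0)
                + (b[2] * x[n - 2] if n >= 2 else 0)
                - (a[1] * y[n - 1] if n >= 1 else 0)
                - (a[2] * y[n - 2] if n >= 2 else 0))
    return y

# Example: a +6 dB boost at 100 Hz to counter a (hypothetical) room null.
fs = 48_000
b, a = peaking_biquad(fs, f0=100, gain_db=6.0, q=1.0)
ir = apply_biquad(np.r_[1.0, np.zeros(255)], b, a)  # impulse response
```

The point is that this filter is fully under the listener’s control: every parameter can be measured for, and tuned to, a specific room, which a fixed encoder-side filter cannot be.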

5 Likes

Haven’t you noticed? The sub-standard is the new standard!