Format Conversion / RAAT

Yes. Without further evidence, that is the only ‘fact’ that you could infer. And even that makes assumptions about the test protocol. I suspect the only irrefutable fact is that it’s a bit more complicated than any of us think…

Then why are we even having this discussion in the first place? If you can’t draw conclusions from the facts provided, you can’t have a discussion. If we can’t draw conclusions until all possible facts are known, there is no point in having a discussion…ever.

What are facts and what is fiction?
Is what you can’t get your head around fiction, just because there is no scientific evidence?
I agree with Andy R that listening is very subjective. All I’m saying is that I hear a difference, and yes, I have my preference.
If we start by saying that there can’t be a difference, then the topic is also closed.
Perhaps a good movie to watch to understand what I mean is “Einstein and Eddington”?

I think we’re having this discussion because we’d both like to understand what might be behind perceived differences in codec performance when there should be none. But to understand, we need to know what the assumptions are, and what can reasonably be inferred. I don’t mind being wrong, but at present I just don’t think we know any more than ‘it’s different’.

Folks, take a deep breath… All streamers/network endpoints are small computers. They run complex realtime software to take packets from the network and send synchronous data streams to the DAC proper. Complex realtime software is often buggy. Variations in data format may trigger such bugs. Small computers are often underpowered. Underpowered computers running complex realtime software can get overwhelmed with context switches/interrupts and drop data. Such things could create sonic artifacts.
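
To make that failure mode concrete, here is a toy Python sketch. It is not Roon/RAAT code and every number in it is invented: a consumer must hand the DAC one frame on a fixed schedule, while an occasionally overloaded producer feeds a small buffer. When a work spike exceeds the buffer headroom, frames miss their deadline and you get a dropout.

```python
# Purely illustrative toy model (not Roon/RAAT code; every number is invented).
# A network endpoint must hand the DAC one audio frame every FRAME_PERIOD_MS.
# The producer (network/decode thread) usually keeps up, but occasional work
# spikes (interrupts, context switches) exceed the buffer headroom -> underruns.
import random

FRAME_PERIOD_MS = 5.0   # hypothetical: one frame due every 5 ms
BUFFER_FRAMES   = 2     # hypothetical playback-buffer headroom
N_FRAMES        = 10_000
random.seed(0)

underruns = 0
t = 0.0                 # time at which the producer finishes its current frame
for i in range(N_FRAMES):
    # Back-pressure: a full buffer stops the producer running ahead of the
    # playback schedule by more than BUFFER_FRAMES frames.
    earliest_start = max(t, (i - BUFFER_FRAMES) * FRAME_PERIOD_MS)
    work = random.choice([4.0] * 49 + [30.0])   # mostly fast, rare big stall
    t = earliest_start + work

    # Frame i must be ready by the time playback reaches it.
    deadline = (i + BUFFER_FRAMES) * FRAME_PERIOD_MS
    if t > deadline:
        underruns += 1  # nothing to play on time -> dropout / artifact

print(f"{underruns} underruns in {N_FRAMES} frames")
```

Whether anything like this actually happens inside a given streamer is, of course, exactly the open question.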

I am assuming for the sake of this part of the discussion that you are hearing a difference. What I am trying to get at is what could be causing this difference. The data is the same, the equipment is the same. What is different? The compression level is what is different. The file that sounds the best is the file with no compression. The files that sound the worst are compressed. If I go with @AndyR, I cannot draw any conclusions from the facts at hand.

I have read nothing to suggest that there is any lack of computing power available or that any data is being lost. You are just adding variables that don’t appear to be in play.

So, I guess we all take a deep breath, chalk it up to unknown variables, and assume nobody is right and nobody is wrong…

That seems clearly contrary to the spirit of audiophilia! :slight_smile:

Oh…and believe what you want to believe because that is the truth as you see it.

Ok. I think it’s ground plane modulation.

I’m going to throw this out there. Now, note, I haven’t read or don’t claim to fully understand the actual code, but from my understanding of other, simpler encode/decode algorithms, this seems accurate.

The amount of work, the exact process, to decompress a FLAC file is no different between any of the compression levels. The additional work only occurs at the time of compression (more cycles spent finding more compression). The only difference is that the machine reads more or less data off the disk, depending on how large the file is.
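
As a rough way to see this for yourself, assuming the reference flac command-line tool is installed and you have some input.wav handy (the filenames and the crude timing approach here are mine, not anything from the FLAC source), you can time encode and decode at the two extreme levels:

```python
# Rough sketch of the claim above: encoding cost rises with the compression
# level, decoding cost barely changes. Assumes the reference `flac` CLI is on
# PATH and that `input.wav` (any PCM WAV file) exists; filenames are arbitrary.
import subprocess
import time

SRC = "input.wav"  # hypothetical source file

def timed(cmd):
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

for level in (0, 8):
    flac_file = f"test_level{level}.flac"
    wav_file = f"test_level{level}.wav"

    # Encode: -0 = fastest/least compression, -8 = slowest/most compression.
    enc = timed(["flac", f"-{level}", "-f", "-o", flac_file, SRC])
    # Decode the result back to WAV.
    dec = timed(["flac", "-d", "-f", "-o", wav_file, flac_file])

    print(f"level {level}: encode {enc:.2f}s, decode {dec:.2f}s")
```

The expected result, per the argument above, is that -8 takes noticeably longer to encode than -0, while the two decode times come out nearly identical.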

Correct. A computer has to work a bit harder to create (encode) a more compressed FLAC file (e.g., 8 vs 0), but the decoding of a FLAC file is essentially the same regardless of how it was encoded (8 vs 0). I find that some proponents of less compressed FLAC files are not aware of this.
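
And for the “same data” part: a quick check, assuming the Python soundfile and numpy packages are available and the two FLAC files from the sketch above exist, is to decode both and compare the samples directly:

```python
# A quick check that the audio payload is identical regardless of how hard the
# encoder worked. Assumes the `soundfile` and `numpy` packages are installed
# and that the two FLAC files from the previous sketch exist.
import numpy as np
import soundfile as sf

data0, rate0 = sf.read("test_level0.flac", dtype="int32")
data8, rate8 = sf.read("test_level8.flac", dtype="int32")

print("sample rates match:", rate0 == rate8)
print("decoded samples identical:", np.array_equal(data0, data8))
```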

Oh, perfect hardware, perfect software, why can’t we have them always :rofl:

Second law of thermodynamics – everything unravels.

Uh, who said perfect anything? Just nominal…

Why is it that we so often jump straight to a technical conclusion, true or false?
Why don’t I ever read a statement from someone along the lines of:
“I have no clue why you are hearing a difference, but let me test this on my equipment and I’ll come back to you on this one.”

I agree, the decode is (relatively) computationally trivial. However, if the only difference is the amount of data read in, you might expect fewer reads to be less work, which should sound better… so… more compressed sounds better?

What have you done!!! :wink:

Whether any of this is audible at the output is entirely dependent on the architecture of the system, and what the designer thought was important. It makes no difference to my ears on my system in my room…

I have already done this test multiple times. I can hear no difference between WAV, ALAC, and FLAC on my system using the exact same PCM data. I am using an ultraRendu as a Roon Endpoint and a sonicTransporter i9 as a core. No, my 2 channel is not crappy…it is quite resolving. No, my ears don’t suck either.

Given the end to end architecture of your system, I wouldn’t expect there to be an audible difference between formats. The processing is well decoupled, so I’m not surprised.

Well, I suspect it’s because they do have a clue as to why you are hearing a difference.