File Formats: Experiments

Roon's architecture completely overcomes that statement as long as the RAAT endpoint is separate from the core. I do want to explore whether there is any possible difference, but that statement is like the opening pitch in a long baseball game - it has already been overcome and doesn't address the current state of the dialogue.

1 Like

I have now tested this on four different endpoints. Playing a WAV, AIFF, or FLAC on them via Roon, all sound the same, using Meze 99 Classics headphones and Meze Rai Solo IEMs for the listening, fed from:

Naim Uniti Atom
Pi 4 into iFi iDSD Nano BL
NUC DE3815 into RME ADI-2 DAC
Hiby R5 DAP

All done now.

Sorry it didn't work out for you after all the effort you put in. I will reiterate, though, that it does work for me.

Nothing to be sorry about; they all sound great, and I'm not losing out on anything.

The only way this could be true is if there is some CPU or drive limitation in reading and converting the FLAC - in other words, a lousy server. So it is possible, but very unlikely. And if it is true, then you have other issues to deal with.

Running such a test is hard. A small side business depends on my listening tests being valid, and often I just shake my head and walk away. A few things to keep in mind: a 1 dB difference will completely invalidate results, and 1 dB matching is next to impossible for most home users. I use fine-grained stepped attenuators and an SPL meter to match levels.
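To put that 1 dB figure in perspective, here is a quick back-of-the-envelope sketch in Python (the two voltage readings are made-up example numbers):

```python
import math

def db_difference(v_ref: float, v_test: float) -> float:
    """Level difference in dB between two voltage (amplitude) readings."""
    return 20 * math.log10(v_test / v_ref)

ratio = 10 ** (1 / 20)  # amplitude ratio corresponding to +1 dB
print(f"+1 dB = x{ratio:.3f} amplitude ({(ratio - 1) * 100:.1f}% louder)")
print(f"2.00 V vs 2.24 V -> {db_difference(2.00, 2.24):+.2f} dB")
```

A 1 dB mismatch is roughly a 12% amplitude difference - easily enough for the louder source to be "preferred" in a poorly matched comparison.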

There are also psychoacoustic reasons you should do both long- and short-duration tests: short for gross differences, and long for preferences and for hard-to-pick-up issues like slight harshness that may only grate on you after prolonged listening.

When things appear impossible, question your test method first. If it holds up, then you might have something. Not holding my breath here.

I will comment that someone above mentioned the oft-repeated advice about not connecting your Roon core directly to a DAC. While there are issues, this need not be a problem. I run it both ways with little if any difference - but it demands that there be no strain on system resources anywhere, plus galvanic isolation.

2 Likes

It is worth pointing out that, assuming both files verifiably have the same checksum, if they sound different then the world of computing is in deep, deep trouble.
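For anyone who wants to check the bit-identical part themselves, here is a minimal sketch, assuming the Python soundfile library and two hypothetical local files. Hashing the files directly would not work, since containers differ in metadata, so the sketch hashes the decoded PCM instead:

```python
import hashlib

import soundfile as sf  # pip install soundfile

def pcm_md5(path: str) -> str:
    """MD5 of the decoded sample data only, ignoring container metadata."""
    data, _samplerate = sf.read(path, dtype="int16")
    return hashlib.md5(data.tobytes()).hexdigest()

# Hypothetical file names - substitute your own rips of the same master.
a = pcm_md5("track.flac")
b = pcm_md5("track.wav")
print(a, b, "identical" if a == b else "DIFFERENT")
```

If the two hashes match, the decoded sample data is identical, byte for byte.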

For a chip to receive identical data on different occasions but then render two different outcomes is very troubling indeed. It would mean that many different people reading this post could all be seeing entirely different text. Or, more importantly, that the wage paid electronically into your account each month - assumed to be the same wage - could show up as a completely different amount.

It would also mean that each time you played a CD on the same CD player (assuming, of course, the amp etc. was the same) it could sound different - and this is not the case.

2 Likes

BTW, I went and read that article. Paragraph after paragraph of technical discussion which, speaking as an engineer who actually designs audio systems, ought not to matter. See my comments below on why differences might be heard yet not be relevant proof of anything. Back to the article - and NOT ONE USEFUL WORD on the test setup or methodology that generated the startling, contradictory-to-all-facts conclusions. We are simply asked to take it on faith that they sounded different and to believe the author.

Crazy!

Disregard.

1 Like

I don't know which article you're referring to, as I posted two links. That said, the evidence is presented. I can also post many more along the same lines, from professional recording and mastering studio engineers. That you choose not to believe or accept said evidence yourself does not make it something others should disregard. Encouraging others to avoid evidence, be it anecdotal or otherwise, only serves to undermine your own credibility.

Encouraging others to avoid evidence, be it anecdotal or otherwise, only serves to undermine your own credibility.

Indeed it would. And I did no such thing.

I noted part of what is necessary for a valid test. I also question claims that require magic - unless we have solid clinical evidence. Maybe they are true, but that evidence is lacking. It's the Positive Feedback article that gave no meaningful test methods; it prattled on about big- and little-endian and other things that, with proper data processing, do not matter.

I listen all the time. I know there are things I cannot explain that are very relevant, and consistently so. Some are in fact now my own trade secrets (or maybe trade "known to very few" LOL). But those I found only after constantly questioning my own sanity. No such questioning appears here, so far.

And I have compared raw files to ALAC, as well as ALAC to MP3 at various compression levels - which is VERY interesting, especially when bad recordings are played and sound BETTER in MP3. But upon reflection this at least makes sense: the high frequencies, where many "nasties" from aliasing, poor reconstruction filters, and overloaded tapes live, are attenuated. Duh.

So save the judgements and please present test methods.

Indeed it would. And I did no such thing.

Uh:

Also, endianness may not be something you understand, or you may not hear differences with it, but note it IS part of the audio broadcast standard: https://community.avid.com/forums/t/128975.aspx

I understand endianness perfectly. Standards are set for compatibility; you can choose anything so long as it's compatible. Big vs. little endian is just the order in which the bytes of each word are stored: 1,2,3,4 vs. 4,3,2,1. Decode. Done.
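To illustrate that byte order is purely a storage convention, here is a small NumPy sketch (the sample values are arbitrary):

```python
import numpy as np

# Four arbitrary 16-bit PCM samples, stored little-endian ("<i2").
samples = np.array([1000, -2000, 32767, -32768], dtype="<i2")
raw_le = samples.tobytes()             # little-endian byte stream
raw_be = samples.byteswap().tobytes()  # same words, big-endian bytes

decoded_le = np.frombuffer(raw_le, dtype="<i2")  # decode as little-endian
decoded_be = np.frombuffer(raw_be, dtype=">i2")  # decode as big-endian
assert (decoded_le == decoded_be).all()          # identical samples either way
print(decoded_le)
```

Either byte order, decoded per its convention, yields exactly the same sample values.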

Well, actually, not 100% true. The transmission of bits to a DAC is a hybrid digital-analog process, contrary to popular "engineering" belief (meaning people who didn't do the math). Simplified, the recreation of an arbitrary analog wave requires a set of Cartesian points, with the y-value determined by each word (limited by bit depth) and the x-value determined by timing (limited by jitter and wander, the worst case being a buffer under-run).
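For a sense of scale, here is the textbook worst-case bound - a sketch of the arithmetic, not a measurement of any real DAC. The steepest slope of a full-scale sine A*sin(2*pi*f*t) is 2*pi*f*A, so a timing error dt produces an amplitude error of up to 2*pi*f*A*dt; keeping that under 1 LSB at N bits requires dt < 1 / (pi * f * 2^N):

```python
import math

def max_jitter_seconds(bits: int, freq_hz: float) -> float:
    """Largest timing error keeping the worst-case sample error under 1 LSB."""
    return 1.0 / (math.pi * freq_hz * 2 ** bits)

for bits in (16, 24):
    t = max_jitter_seconds(bits, 20_000.0)  # worst case: full-scale 20 kHz sine
    print(f"{bits}-bit, 20 kHz: jitter must stay below {t * 1e12:.2f} ps")
```

That works out to roughly 250 ps at 16-bit/20 kHz and under 1 ps at 24-bit, which is why clock quality gets so much attention in DAC design.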

There are many things that can subtly affect the timing (jitter), including noise on USB or other interfaces that might impact clock recovery (in a synchronous, "source as clock" system), although this remains hard to quantify. It's a major problem in 100G+ transmission gear over long distances, where transitions and zero-crossings become fuzzier.

We are not talking bit errors. We are talking very slight variations in the timing of the processing of the samples/words.

This is fairly well accepted and is why any decent DAC today is galvanically isolated and has some method of jitter reduction (read the DAC chipset data sheets). It's also why the preferred mode is asynchronous: let the bits get read into a buffer and clocked out as perfectly as possible. But note - this is only done on modern, high-quality systems using USB or, better yet, Ethernet.
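As a toy illustration of that buffer-and-reclock idea (deliberately simplified - print() stands in for the DAC output, and the timing here is nowhere near audio-grade):

```python
import time
from collections import deque

buffer: deque = deque()

def receive(samples):
    """Producer side: samples arrive whenever the source sends them."""
    buffer.extend(samples)

def clock_out(n: int, period_s: float):
    """Consumer side: emit n samples on a fixed local schedule."""
    for _ in range(n):
        sample = buffer.popleft() if buffer else 0  # under-run -> silence
        print(sample)            # stand-in for handing the word to the DAC
        time.sleep(period_s)     # cadence set by the local oscillator only

receive([10, 20, 30, 40])        # arrival timing is irrelevant...
clock_out(5, 0.001)              # ...output timing is not (note the under-run)
```

The point of the structure is that the output cadence comes entirely from the consumer-side clock; arrival jitter only matters if the buffer runs dry.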

So while I'm the one who questioned the observation enough to get attacked, the basic statement - that format, and how it is converted and delivered, might impact jitter and thus sound - is feasible even in a world of 100% data integrity. It's the timing integrity that's generally in question.

2 Likes

But that shouldn’t affect a separate endpoint.

See my comment (above? I get confused by how this forum formats things…). It's not about bit integrity, certainly - we hardly need analyzers to tell us that. If anything, it is about timing integrity. Note I use ALAC and can hear no difference on a $50k+ lab (audio) system - so I agree with the conclusion, but consider the comment insufficient to prove inaudibility. It is only sufficient to prove bit integrity, which, since the S/PDIF interface is quasi-analog, is insufficient.

This is why so many audiophiles don’t put much faith in technical facts - because they are often quoted incompletely and are therefore wrong.

Let me repeat: bit integrity is not the issue. The problem is that most people arguing this don't realize that. A proper analysis gets messy very quickly, and keeping it simple assumes that people know things they simply don't. I blame the "engineers" for not doing their homework before making sweeping pronouncements.

Ever wonder why Roon suggests you have a three-stage setup with core-bridge-endpoint? It's not because bits are falling on the floor and getting eaten by the cat :slight_smile:

But it’s not about timing integrity either because USB is async. Just like network is async.

Again: Wikipedia defines jitter as "the deviation from true periodicity of a presumably periodic signal". So with an async transmission, the USB signal is not presumed periodic, and therefore it cannot introduce jitter.

So if it is not about bit integrity and not about timing integrity, then what is it?

You are not correct, Anders. USB Audio is ALL about real-time data, where timing is essential. It just has some mechanisms for managing buffer under- or overruns - hence the impossibility of requests for resends etc.

USB Audio is more similar to S/PDIF and analog signals than to Ethernet packets from an audio file being copied over the network.
And before anyone suggests that "my Excel file will be more correct with an audio-optimized computer" or "my prints just got a lot more colorful thanks to the Nordost USB cable": that's like saying you've been to the Woodstock festival when you've actually only seen some photographs of it.

3 Likes

I largely agree with what both you and @Just_Me are saying with respect to timing issues, jitter, etc., potentially causing changes to SQ.

My point, though, is that two data streams which are both verifiably bit perfect (via an MD5 checksum) will not sound different from one another when played back on identical equipment, and to suggest otherwise undermines the fundamentals of computing. At the point the data stream hits the DAC, as either PCM or DSD, it is purely digital.

Any timing issues are not the result of differences in the data stream but rather of timing inaccuracies introduced by the DAC and the USB link; it is not that two fundamentally identical data streams can somehow sound different on otherwise identical playback kit.

Further, any modern DAC with a modern USB interface will regenerate the signal anyway, so things like jitter are pretty much a thing of the past, as any measurement of such a DAC shows.

Couldn't have put it any better :smile:

Actually, I am stating the inverse: that an audio file won't sound any better regardless of the computer feeding the data to the DAC, whether "optimised for audio" or otherwise. A bit-perfect FLAC, AIFF, or any other file is just that.

As per my post above, how the DAC handles that data does depend somewhat on the minimisation of timing errors that can be induced by the DAC and its input, but the actual file, assuming it to be bit perfect, and the computer it's stored on are entirely irrelevant.

1 Like

But file format is irrelevant to that if the core is streaming the already-decoded PCM to a separate endpoint.

1 Like