RAAT and clock ownership

Thank you for the correction, Brian. So what does this imply for the quality of clocks (all of them) in a RAAT implementation, other than the quality of the DAC’s clock?

For it’s starting to sound like we don’t need to be overly concerned about the quality of any clock but the DAC’s, because RAAT is sorting that out for us. Is that a true statement? Would be awesome news, if so.

Thanks.

Yup, that is correct.

Awesome! That’s a MAJOR selling point for RAAT (and Roon) IMO. Thank you sir. :slight_smile:

I’m glad someone else had the questions I had and that they were professionally answered. Good job Team Roon!

I think that this “answers the mail” for the most tragically technical. Now, I am looking for exploitation. :smiley:

@brian and @danny One question that has always troubled me about this (since CD players arrived in the 80s): the original design, baked into the S/PDIF spec itself, is that the source controls the clock. I have always thought this was supremely stupid, as it led to those super-expensive CD players with heavy platter mechanisms. You want picosecond stability for a digital system, and you try to achieve that with a mechanical device? I always felt that this was influenced by vinyl turntables.

And Meridian indeed used a high-speed DVD mechanism and read it asynchronously.

Why didn’t the original designers do that? Why was S/PDIF so backward?

Specifically, in your case, why not use a pull model, where the remote requests data as it needs it?

I realize this wouldn’t support multi-zone sync. Is that the only reason? Or are there other reasons a pull model wouldn’t work?

Agreed about S/PDIF. I have guesses as to why they “got it wrong”, but I don’t know for sure.

Roon/RAAT are based on a pull model. I’m not sure where my explanation went wrong to cause confusion.

(Meridian’s streaming protocol is also a pull model).

For multi-zone, we run in pull mode with the zone that has been elected as clock master and push to the other zones, which are forced to compensate for drift internally.

The actual implementation is slightly indirect. The core knows how quickly to send packets to an endpoint not because of an explicit request for data, but because the core knows what time it is at the high-resolution clock that’s driving the audio stream, and it understands the intended relationship between wall time and stream time. It behaves like a soft real-time system based on those time relationships plus periodic synchronization with the endpoint clock.

This turns out to be much more elegant than explicit pull requests, and lets single-zone and multi-zone cases share virtually the whole implementation. There is one extra API for slave zones, which basically tells them to go sync with a clock source of the server’s choice and adjust accordingly.
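To make that concrete, here’s a minimal sketch in Python of pacing packets from a time relationship rather than from explicit pull requests. This is not Roon’s actual code; the `endpoint` object and its `sync_clock`/`send` calls are hypothetical stand-ins for whatever messages RAAT really exchanges.

```python
import time

SAMPLE_RATE = 44100          # samples per second
SAMPLES_PER_PACKET = 441     # 10 ms of audio per packet

class PacedSender:
    """Sketch: send packets on a schedule derived from a clock relationship."""

    def __init__(self, endpoint):
        self.endpoint = endpoint
        # Estimated wall-clock time (on our clock) corresponding to stream
        # position zero on the endpoint's audio clock, refreshed by periodic
        # sync exchanges. `sync_clock` is a hypothetical call.
        self.offset = endpoint.sync_clock(time.monotonic())
        self.stream_pos = 0  # next sample index to send

    def deadline(self):
        # Wall-clock time at which the endpoint will need the next packet:
        # stream time converted to seconds, shifted by the clock offset.
        return self.stream_pos / SAMPLE_RATE + self.offset

    def run(self, packets):
        for pkt in packets:
            # Sleep until just before the endpoint's buffer needs this data.
            delay = self.deadline() - time.monotonic()
            if delay > 0:
                time.sleep(delay)
            self.endpoint.send(pkt)
            self.stream_pos += SAMPLES_PER_PACKET
            # Periodically re-sync so clock drift doesn't accumulate.
            if self.stream_pos % (SAMPLE_RATE * 10) == 0:
                self.offset = self.endpoint.sync_clock(time.monotonic())
```

In the grouped case, per Brian’s description above, the same time relationships drive the master zone in pull mode, while the slaves receive pushed data and compensate for drift internally against the master clock.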

(The Meridian protocol actually uses explicit “pull” requests and has explicitly different flows for the single and multi-zone cases. That works too, but that protocol has about 2.5x as much surface area, a lot more chatter, and the lesser-used multi-zone paths never got enough testing and were/are a constant source of trouble.)

That’s very elegant. Thanks.

(Your explanation was quite clear, I skimmed it and asked too quickly.)

Very interesting insights and discussion! Thank you very much @Brian and @danny.

I was also interested in What’s Wrong With UPnP.

It would be very useful to write a short summary paper with simple diagrams (for dummies like me :smiley: ) explaining the Roon/RAAT architecture and protocol, the differences from other solutions (mainly UPnP/AirPlay and perhaps LMS), and how these technical points lead to a better overall user experience.

@Brian - thanks for confirming that. But now that I’ve had a chance to digest it a bit more, I wonder… might that be incorrect in any situation where you have a Roon Ready endpoint (aka streamer) with an S/PDIF connection to the DAC?

I would assume that - in that case - all RAAT benefits would stop at the endpoint, and the quality of the S/PDIF communications with the DAC would be governed by the normal set of S/PDIF concerns. Is that true?

So, to make sure I’ve understood you correctly: in this chain, Roon computer > Ethernet > Squeezebox > coax S/PDIF > DAC, the S/PDIF out from the Squeezebox is being clocked by the Squeezebox and then asynchronously sampled at the DAC?

Yes, the S/PDIF signal delivers samples at its own pace. Asynchronous resampling is one technique used by DACs to cope with that, but there are others, too.

It’s possible to simply ignore the problem. This is a consumer-grade solution, but you can totally just use the incoming S/PDIF signal to drive the whole process and ignore (or omit) the internal clock.

Some DACs slowly adjust their internal clock to adapt to the incoming rate, using a small buffer to prevent overruns/underruns, and then re-clock the data out. I know that Meridian products work this way but they are not the only ones. I took a guess that MSB used a similar approach (knowing that they make a ladder DAC) and their marketing materials suggest that I’m correct.
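For illustration, here’s a toy version of that slew-the-clock idea: a small FIFO absorbs the rate mismatch while a gentle control loop nudges the local oscillator toward keeping the buffer half full. All names here are made up; `vcxo_ppm` stands in for whatever frequency-trim control the hardware actually exposes, and this is not any particular DAC’s implementation.

```python
class SlewingReclocker:
    """Sketch: trim a tunable local clock so a small FIFO stays half full."""

    def __init__(self, capacity=4096, gain=0.01):
        self.buf = []
        self.capacity = capacity
        self.gain = gain        # how aggressively we correct (keep it gentle)
        self.vcxo_ppm = 0.0     # current trim applied to the local oscillator

    def on_spdif_sample(self, sample):
        # Incoming samples arrive at the source's pace.
        if len(self.buf) < self.capacity:
            self.buf.append(sample)
        # else: overrun; a real device would slew harder or resync

    def on_output_tick(self):
        # Called at the local clock's pace; this is the re-clocked output side.
        out = self.buf.pop(0) if self.buf else 0  # underrun -> emit silence
        # Error is positive when the buffer is fuller than the midpoint,
        # i.e. the source runs fast relative to us, so we trim our clock up.
        error = (len(self.buf) - self.capacity / 2) / self.capacity
        self.vcxo_ppm += self.gain * error
        return out
```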

DACs that already have a big over-sampling stage built in can reconcile the clock discrepancy in their existing resampling process. The technical documentation for the ESS Sabre, a very common chip in USB DSD-capable DACs, discusses this in section III-B of this document. My understanding is that this is the most common approach for sigma-delta based DACs.
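As a toy illustration of what an asynchronous resampler does: track a fractional read position between two unrelated clocks and interpolate. Real converters, including the one in the ESS Sabre, use long polyphase filters at a much higher internal rate; linear interpolation here is only to show the idea.

```python
def async_resample(samples, ratio):
    """Toy asynchronous rate converter using linear interpolation.

    `ratio` = input_rate / output_rate, e.g. 44100 / 44100.005.
    """
    out = []
    pos = 0.0  # fractional read position in the input stream
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Interpolate between the two nearest input samples.
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out
```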

Why is this OK? Because this sort of conversion does not materially harm the signal quality since the oversampling ratio is very high. The ESS Sabre is re-sampling your signal asynchronously to something like 40MHz. That is much less significant to quality than going from 44100 to 44100.005.
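Back-of-the-envelope numbers behind that claim (figures approximate):

```python
# Rough numbers behind the "very high oversampling ratio" point.
incoming = 44100.0          # nominal S/PDIF rate, Hz
drifted  = 44100.005        # the same stream as actually clocked in
internal = 40e6             # ballpark internal rate of a Sabre-class ASRC

ratio = internal / incoming
print(f"oversampling ratio: ~{ratio:.0f}x")   # ~907x

drift_ppm = (drifted - incoming) / incoming * 1e6
print(f"clock drift: ~{drift_ppm:.1f} ppm")   # ~0.1 ppm
# At a ~907x oversampling ratio, absorbing a ~0.1 ppm rate difference is a
# vanishingly small perturbation of the resampler's phase step.
```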

I would assume that - in that case - all RAAT benefits would stop at the endpoint, and the quality of the S/PDIF communications with the DAC would be governed by the normal set of S/PDIF concerns. Is that true?

Of course. S/PDIF is still S/PDIF. There is no way to do anything about that from where we sit.

I keep ignoring this case because S/PDIF is legacy technology. There is no saving it from the limitations that were baked into its original design: it doesn’t support DSD without encapsulation, it has rate/bit-depth limitations that don’t admit all modern formats, and it’s source-clocked.

Thanks Brian. Agreed too. Was not remotely trying to imply that Roon should fix something out of its control. Just trying to make sure that my understanding was complete. Thanks.

Improved my understanding too.
Thanks Brian/Danny and all.

This discussion raises some questions for me. I apologize if I have failed to understand the above-posted information.

I am currently using a Musical Fidelity V90 DAC being fed via USB by an i3 NUC running Roon Remote. I will be upgrading the DAC in the next year or two, and some of the ones I’m looking at have USB inputs and others don’t (S/PDIF, AES/EBU only). From the above discussion, it seems that converting the USB signal to S/PDIF would add a clock to the chain, potentially robbing RAAT/Roon of full clock ownership. Should I stay away from non-USB DACs?

The conundrum for me is exacerbated by the fact that the Musical Fidelity DAC will only convert up to 96kHz streams when fed via USB, but will do up to 192kHz if fed S/PDIF. I was considering adding a USB-to-S/PDIF converter as an interim step to a DAC upgrade (and to see what the Musical Fidelity can do with that level of hi-rez - curiosity).

For example, if I were to eventually upgrade to a Berkeley Audio DAC, I would probably also use their Alpha USB to convert to S/PDIF or AES/EBU, as their DACs don’t have USB inputs. Would I be compromising sound quality by converting the signal to S/PDIF before it gets to the DAC that way?

Thanks for any guidance you can provide.

A lot depends on how the DAC handles the clock. If the DAC has its own USB interface, it could be operating purely asynchronously and using a free-running clock for the DAC chip. Audio quality will not be dependent on clock jitter, because there will be little, if any.

If the DAC uses an S/PDIF input (AES/EBU, TOSlink, and coax S/PDIF are all the same – use whichever you like best – I prefer TOSlink for isolation) then the clock must be derived from the incoming stream. A good DAC, like the BADA Alpha DAC, will use a PLL to lock its own internal low-phase-noise clock to the incoming data stream. No loss of quality. And adding an external USB-to-S/PDIF converter won’t make it any better – or worse.

If the DAC extracts the clock from the data stream without regenerating it (PLL + local oscillator) then it is possible that the quality of the clock in the S/PDIF could have an effect on the sound. I just don’t know how to find out how the DAC processes the clock. If the DAC has two separate clock oscillators then it probably phase locks a local clock to reduce jitter.
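For what it’s worth, here’s a toy sketch of the PLL-plus-local-oscillator idea: a second-order loop steers a clean local clock’s phase and frequency toward the jittery incoming S/PDIF frame edges, so the conversion stage is clocked from the filtered local oscillator rather than directly from the recovered clock. The gains and structure here are illustrative only, not any real DAC’s design.

```python
class ClockRecoveryPLL:
    """Toy phase-locked loop: lock a clean local oscillator to jittery
    incoming S/PDIF frame times."""

    def __init__(self, nominal_rate=44100.0, kp=0.05, ki=0.002):
        self.period = 1.0 / nominal_rate  # local oscillator period, seconds
        self.phase = 0.0                  # predicted time of the next frame
        self.kp, self.ki = kp, ki         # proportional / integral loop gains
        self.freq_trim = 0.0              # accumulated frequency correction

    def on_frame_edge(self, arrival_time):
        # Phase error between the (jittery) incoming edge and our prediction.
        err = arrival_time - self.phase
        # Second-order loop: the integral term tracks the frequency offset,
        # the proportional term pulls the phase in. Low gains filter jitter.
        self.freq_trim += self.ki * err
        self.phase += self.period + self.kp * err + self.freq_trim
        # The DAC chip is then clocked from this steady cadence, not directly
        # from the jittery incoming edges.
        return self.phase
```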

I am currently playing with the Breeze Audio DACs available on eBay for $60. They look like they have promise. At $60, what have you got to lose? They go up to 192/24.

At the moment it is somewhat arbitrary; there was a discussion about making it user-selectable, as only the user can say which is the “best room” for them.

I’ll see if I can find it.

Perhaps the simplest way to force the “master” would be to use the device that “owns” the zone as the master. The devices that are added to the zone to create a group are the slaves.

Hi Brian,

I found the discussion and have split it out into its own topic to improve focus and make it easier for others to find:
User Selection of the Master Clock in Grouped RAAT Zones.
In it, Brian discusses “first zone as the master”. Have a read, and if you have any comments or questions, please post them in that topic.
