dCS Bartók Streaming DAC + HP amp

There is no concept of “sound different” with async Ethernet framing, other than the music stops playing (or drops out) due to packet loss or excessive re-transmission. There simply is no “clock” to be recovered in this case.

As you correctly pointed out earlier, jitter can be an issue with S/PDIF (over AES/EBU, coax or TOSLINK), as these synchronous transmission methods carry timing information (i.e. clock) which must be recovered by the receiver. However, using modern transmitters and receivers, it is likely that any jitter with S/PDIF is inaudible.

I always say that “should” is the most dangerous word in IT. :sunglasses: Roon itself is a great illustration of this point: the original version of RAAT ran over UDP, which following conventional wisdom, should (there’s that most dangerous word) be the preferred way to transport real time data, e.g., audio, over a TCP/IP network. TCP gains reliability by acknowledging successful data reception and retrying otherwise. For streaming data (e.g., audio or video) over a wide area network, that simply takes too long, or you need a larger buffer on the receiver. However, it turns out that RAAT was about the only use case for moving gigabytes of data using UDP in home networks, and lots of home networking gear was built assuming that never happened. So Roon switched, and the current version of RAAT runs over TCP instead. (NB: I’m simplifying here; I do know TCP/IP in gory detail, but this post is long enough already.)
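To put rough numbers on the TCP trade-off: reliability costs latency, which you pay for with receiver-side buffering. Here's a back-of-the-envelope sketch; the RTT and retry figures are illustrative assumptions, not Roon's actual parameters.

```python
# Back-of-the-envelope: how much audio a receiver must buffer so that a
# TCP retransmission does not interrupt playback. All figures are
# illustrative assumptions, not anything RAAT actually uses.

def min_buffer_ms(rtt_ms: float, retries: int) -> float:
    """Worst case: a lost segment is detected and resent `retries`
    times, costing roughly one round trip per attempt."""
    return rtt_ms * (retries + 1)

# On a LAN (~2 ms RTT), even a couple of retries is negligible...
lan = min_buffer_ms(2.0, 2)      # 6 ms of audio
# ...but over a WAN with a 100 ms RTT, the buffer grows accordingly.
wan = min_buffer_ms(100.0, 2)    # 300 ms of audio
print(lan, wan)
```

On a home LAN the retransmission penalty is tiny, which is part of why the switch to TCP was workable in practice.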

While I generally concur with @miguelito’s thoughts, I wouldn’t go quite so far as to say that I’d expect the interfaces to sound different, but I would allow for the possibility. Within the last year or so, I’ve tried to start reducing my box count, so I know what choice I personally would make in this situation, but that’s personal preference, and different people can disagree on that.

That’s really what I meant. I have actually tried both with Roon and detected no difference in my listening. Some others, maybe with better ears, might.

As for the convenience factor and reducing complexity, I am in your camp.

Understood. It would not be a clock issue, but it might be other issues. Consider for example whether the RAAT implementation inside Bartok is so heavy on the processor that it produces some power supply rail noise. I don’t think this is the case, I am just trying to point out that jitter generated by clock recovery is not the only source of possible sound differences.

I should add that toslink sounds markedly worse in most situations - even with today’s hardware - than coax or AES. In this case it is jitter induced by clock recovery.

One comment on dCS Mosaic accessibility for the visually impaired: both the high contrast and larger text settings under Accessibility on iOS are picked up by dCS Mosaic (they are not picked up by Roon). So as far as dCS Mosaic is concerned, you can easily make it much more readable if you are visually impaired.

YMMV, of course, but I think that is a misconception for modern hardware. Here is an example where that is not the case with the Topping D50s DAC, which is an inexpensive DAC at US $250 (miles away from the pricing of the elegant dCS components):

Skimming through this, I don’t see toslink being better - if anything a little worse… Whether the difference is audible or not is a different matter.

I should point out that there are implementations which effectively reclock the incoming SPDIF signal, making toslink potentially better than wired SPDIF since there’s natural galvanic isolation. My understanding is that PS Audio uses such solutions. However, you always run the risk of over- or under-runs of the buffer, as the clocks of the source and the DAC will never exactly match - if the DAC runs a tad faster you can empty the buffer, or fill it up when the DAC runs slower. Short of a master clock orchestrating the whole system (which is what dCS does), there’s no foolproof way to remove this limitation of the interface.
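To get a feel for how long a clock mismatch takes to bite, here's a toy calculation. The FIFO depth and ppm offset are made-up illustrative numbers, not any real DAC's specs.

```python
# Sketch: how long until a FIFO over- or under-runs when the source and
# DAC clocks differ by some parts-per-million. Numbers are illustrative.

def seconds_to_overrun(buffer_samples: int, sample_rate: int,
                       ppm_offset: float) -> float:
    """Excess samples accumulate (or drain) at sample_rate * ppm/1e6 per
    second; assuming the FIFO starts half full, the over/under-run hits
    once half the buffer's slack is consumed."""
    drift_per_sec = sample_rate * ppm_offset / 1e6
    return (buffer_samples / 2) / drift_per_sec

# A 4096-sample FIFO at 44.1 kHz with a 10 ppm mismatch:
t = seconds_to_overrun(4096, 44_100, 10.0)
print(f"{t:.0f} s")  # a few thousand seconds, i.e. on the order of an hour
```

So with decent oscillators you can get through an album, but over a long listening session the over/under-run is a matter of when, not if.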

As a side note, I run a Google Chromecast Audio dongle via toslink into my dCS Rossini + Master Clock. I can run it two ways: slaving the clock to the one recovered from the toslink signal, or forcing the DAC to use the clock from the master clock, running the risk of a sound glitch. I do the latter, as the sound with the master clock is better - but I do run the risk of an over- or under-flow in the incoming buffer.

I will make another comment here… This one is a little technical but might be of interest to someone.

When running the clock in rogue-dictator mode (that’s what I am calling it when I run the DAC clock off of the master clock rather than the true source), I have NOT experienced a glitch that I ever noticed. I have a theory for this: the two clocks must differ, so theoretically the buffer should over- or under-run. Depending on the size of the difference, this could happen within minutes or hours. But if the source clock has enough drift (i.e., its frequency distribution is wide enough to include the much more precise frequency of the master clock), and if the drift happens faster than the timing difference accumulates, then effectively the source is “recovering the buffer” as it drifts. I don’t have a full mathematical proof here; this is just my intuition.
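This intuition can be poked at with a toy model: integrate the instantaneous clock offset over time and watch the FIFO excursion. A fixed offset grows without bound, while an offset that wanders symmetrically around the DAC clock keeps the excursion bounded. The amplitudes and wander rate below are invented for illustration.

```python
import math

# Toy model: compare a source clock with a fixed ppm offset against one
# that wanders symmetrically around the DAC clock. Purely illustrative.

def max_fill(offset_ppm_fn, sample_rate=44_100, seconds=3600, step=1.0):
    """Integrate the instantaneous clock offset over time and report the
    worst-case FIFO excursion in samples (starting from half-full = 0)."""
    fill, worst, t = 0.0, 0.0, 0.0
    while t < seconds:
        fill += sample_rate * offset_ppm_fn(t) / 1e6 * step
        worst = max(worst, abs(fill))
        t += step
    return worst

# Fixed +5 ppm offset: the FIFO excursion grows steadily (hundreds of
# samples per hour here).
fixed = max_fill(lambda t: 5.0)
# Same 5 ppm amplitude, but wandering sinusoidally around zero: the
# excursion stays bounded to a few dozen samples.
wander = max_fill(lambda t: 5.0 * math.sin(2 * math.pi * t / 600))
print(fixed, wander)
```

So the intuition holds in this toy model: if the wander is zero-mean relative to the master clock and fast enough, the buffer keeps getting "recovered" before it runs out.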

Not having implemented a FIFO for SPDIF, I am guessing here… but you need a buffer to handle the clock drift (and instantaneous irregularities) during the song and/or album time duration. Then, you reset the FIFO at pauses, or make some correction (sample drop or sample repeat or FIFO reset when relative silence is detected) to prevent the buffer over/under-flow. I am guessing that none of these strategies would be audible.
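The drop/repeat-at-silence strategy above can be sketched in a few lines. This is a hypothetical illustration with made-up thresholds, not any real DAC's firmware.

```python
from collections import deque

# Sketch of the correction strategies described above: a FIFO that, when
# nearly empty or nearly full, repeats or drops one sample, but only
# during relative silence. Depth and silence threshold are invented.

class SpdifFifo:
    def __init__(self, depth=4096, silence=16):
        # maxlen means a loud sample arriving into a full FIFO still
        # overwrites the oldest one - the uncorrectable overrun case.
        self.buf = deque(maxlen=depth)
        self.depth = depth
        self.silence = silence   # amplitude below which we dare to correct
        self.last = 0

    def push(self, sample: int):
        """Drop one quiet sample when nearly full (source clock fast)."""
        if len(self.buf) >= self.depth - 1 and abs(sample) < self.silence:
            return
        self.buf.append(sample)

    def pull(self) -> int:
        """Repeat the last quiet sample when nearly empty (source slow)."""
        if len(self.buf) <= 1 and abs(self.last) < self.silence:
            return self.last
        self.last = self.buf.popleft() if self.buf else 0
        return self.last
```

Since a single dropped or repeated sample during near-silence is a sub-LSB-scale event, it's plausible none of these corrections would be audible.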

Interesting to read the dCS patent:

Look at section 0074.

Interesting… But “resetting”: how would that work? You can’t change the DAC clock, the show must go on at that exact pace - and if there’s an underrun, there’s no way to tell the source “send me more data”.

Interesting, but as I said, the clock is the clock and the concept of varying it is precisely something you’re trying to avoid.

Specifically in dCS’s implementation, the clocks do not need to have a data feedback loop - the only cables in the Rossini Master Clock to the Rossini DAC are the two clock frequencies - 44.1 and 48. So I don’t quite understand what they are talking about in 0074.

I do know there’s a FIFO in place though, not a large one I understand.

Good find on the patent btw… Thank you for that!

I’m blind, and I can’t see anything on the screen. VoiceOver is also an iOS feature: it lets people who cannot use zoom or increased contrast navigate the touch screen by having it read aloud in synthetic speech. Unfortunately, here too the problem is on the dCS Mosaic side, because the icons are unlabeled and it is not known what they are for. It’s even worse in the dCS Bartok app; there is nothing you can do there, because no icon is correctly described for VoiceOver. So your hint is not useful to me. I have already written to technical support asking them to correct this in future versions of the application, but I did not receive a response.
Regards Robert


Yes, that’s the point with repeating samples; it’s a hack to provide data when it isn’t there. Airplay does this to correct issues with multi-device timing. Not saying it’s a good idea, but it’s one solution.

That’s the focus of the invention (from a patent viewpoint): they closely monitor the buffer for under/over-run and then apply micro-variations to the clock frequencies (44.1 or 48) to correct for clock drift. Clever.
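As I read it, that amounts to a feedback controller: buffer fill level in, tiny clock-frequency correction out. Here's a toy proportional-controller sketch of the idea; the gain and clamp values are invented, not taken from the patent.

```python
# Toy sketch of the idea described above: watch the FIFO fill level and
# nudge the word-clock frequency by a few ppm to steer the buffer back
# toward its midpoint. A bare proportional controller with made-up gains.

def adjusted_clock_hz(base_hz: float, fill: int, depth: int,
                      gain_ppm: float = 2.0, limit_ppm: float = 5.0) -> float:
    """Positive error (buffer filling up) -> speed the DAC clock up a hair;
    negative error -> slow it down. Correction is clamped to +/-limit_ppm."""
    error = (fill - depth / 2) / (depth / 2)          # -1.0 .. +1.0
    ppm = max(-limit_ppm, min(limit_ppm, gain_ppm * error))
    return base_hz * (1 + ppm / 1e6)

print(adjusted_clock_hz(44_100.0, 2048, 4096))  # centered buffer: no change
print(adjusted_clock_hz(44_100.0, 3072, 4096))  # filling up: nudged faster
```

Corrections this small (single-digit ppm) are orders of magnitude below the pitch variation of any analogue playback medium.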

Probably not audible, but one could make the case that you are now altering audio playback to correct a clock problem. In real life (again, I’m guessing) clock drift is minor/sporadic and so the variations in clock frequency to adjust are very, very small. If not, get a better source component with a S/PDIF transmitter that works properly.

If I understand correctly, this is not something that works with a generic SPDIF source, hence not all that interesting, frankly. If you have a dCS system, there’s one clock (and if you don’t have a separate master clock, the DAC can clock the dCS source, so still not a problem). If your source is not a dCS source, you’re out of luck.

Yeah but no… If the frequency of the variation is high (very likely) then you’re effectively recreating the jitter in the process. I don’t think this would work.

More importantly, Andrew from dCS warned me not to use the master as clock if I want to ensure no glitches - hence he’s saying that the buffer doesn’t adjust as the article says - or we are misinterpreting the article.

Sure, but I am just repeating what’s in their patent.


Interestingly… I have a microRendu (the same exact software as your OR). I started using it with my Schiit Asgard 3 + Multibit DAC (it’s my desktop headphone setup).

I noticed twice this week that playback simply stalls - happened twice. Basically it looks like it should start playing but simply doesn’t. I didn’t check at the time if the DAC dropped off the DAC Diagnostics in the mR but I will next time it happens. Could be a bug in the sonicorbiter software or even the current version of the RAAT library (1.1.36).