Can I connect Roon Nucleus directly to DAC, thus avoiding a streamer?

Yes, they do. The question, however, is whether these mechanisms have any effect on jitter, and thus on SQ.

Now that you mention it, I think you’re right. I’ve looked at the configurator settings and the default is a downsampled 96 kHz, although you can specify 192 kHz (not bothered too much about this). So much for my cunning plan, but it does sound very different! Thanks for pointing this out.

For those who are interested, Hans Beekhuyzen, whom I follow on YouTube, explains how 0s and 1s can go wrong and thus impact SQ. The link below is the written version Hans published; he links to his video version inside.

1 Like

I have my PC connected directly to my Chord Qutest over USB and then to McIntosh C48 and MC402 to Dynaudio speakers. I did some comparisons by building a raspi streamer with USB to Qutest, using a fancy USB pc card, and using my old Squeezebox Touch (coax to Qutest and also USB). I can’t hear a difference between any of those when playing the same track. All sound equally good to me in my system.

1 Like

Thanks,
With all your glorious equipment it is interesting to see that you are letting the digital signal pass through your PC rather than having an Ethernet connection pass the signal through a ‘quieter’ device before it heads to your Chord DAC.

Richard, I had the same idea. My setup is Roon Core, a Synology NAS, an Auralic Aries V1 streamer, and a Mytek Brooklyn DAC+. I tried removing the Auralic Aries G1 and connecting Roon to the Mytek directly. The result was very surprising: the sound was really degraded, a kind of thin, hollow sound. Since I have both Tidal and Qobuz, I tried streaming; the result was slightly, just slightly, better sound. Mind you, I use a TP-Link media converter, so the long run (about 25 feet) from where my NAS and router sit to the Auralic and/or Mytek is fiber-optic cable (thus no RFI or EMI). I have no logical explanation for why this happened, but I have tried it on three separate occasions with the same result. Whether the femto clock in the Auralic does a better job than the Mytek’s, or it is something else, I really can’t say. Together the Auralic and Mytek work perfectly. I hope this gives you another perspective on the subject.

3 Likes

There’s an enormous amount of outdated and sometimes just plain wrong information that circulates in so-called “audiophile” circles. Someone should write up a comparison of the various digital signalling techniques that exist.

Hans, in that article and video, is basically talking about an old, outdated failure mode that shouldn’t happen any more. (I see that he brings in one of his pieces of equipment as a prop; in his videos, he stacks those props behind him as he speaks.)

Here’s my understanding of the situation involving transmission of digital information. If I’m mistaken about some part of it, let me know.

For those who find it TL;DR, my advice is: just use USB.

The CD and accompanying players came out in 1982. Early players incorporated D/A conversion and output an analog signal, but there was an immediate desire to get the bits out of the player so that after-market DACs could be merchandised. In addition, the industry wanted a means of digital transmission that could interoperate with professional equipment. So a bunch of audio engineers, the AES, along with a broadcasting society, the EBU (this is why the interface is sometimes called AES/EBU), got the AES3 standard together and published it in 1985. It can carry two channels of PCM audio (that is, stereo) over several different media. It was designed for interconnect of professional equipment.

Sony and Philips, the purveyors of the CD format, in the same time frame were developing the Sony/Philips Digital Interface, what we call S/PDIF. They made it use the same digital transmission protocol as AES3, but with less expensive connectors, suitable for consumer products. There is one significant protocol change, the channel status word, which in S/PDIF normally operates in “consumer” mode and carries copy-protection information from sender to receiver. S/PDIF is designed to carry 20-bit audio streams; 24-bit data can be carried with “extra bits”, but those bits may legitimately be ignored by the receiver. Finally, S/PDIF sends data using the sender’s clock; the receiver must synchronize with this clock, making jitter something of a concern. Modern S/PDIF interfaces, though, typically re-clock the inputs and make jitter almost irrelevant.
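To make the “receiver must recover the sender’s clock” point concrete: AES3 and S/PDIF put bits on the wire using biphase-mark coding, where every bit cell begins with a level transition and a ‘1’ adds a second transition mid-cell. Here’s a toy Python sketch of that encoding (my own illustration, not from either spec; the function name is mine):

```python
def biphase_mark_encode(bits, level=0):
    """Biphase-mark coding (BMC), as used by AES3/S/PDIF.

    Each bit cell starts with a transition; a '1' adds a second
    transition mid-cell. Returns a list of (first_half, second_half)
    line levels per bit cell."""
    cells = []
    for bit in bits:
        level ^= 1            # guaranteed transition at every cell boundary
        first = level
        if bit:
            level ^= 1        # extra mid-cell transition encodes a '1'
        cells.append((first, level))
    return cells
```

Because there is a transition at every cell boundary, the receiver can recover the sender’s clock from the data stream itself — which is exactly why receiver-side clock recovery (and jitter) became a talking point for these interfaces.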

One thing to note about the AES/EBU and S/PDIF protocol is that there is almost no provision for error detection. There is a per-word parity bit, and a CRC check for the channel status word, but the data integrity is otherwise unprotected.
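As a toy illustration of how thin that protection is, here is the even-parity scheme in Python (my own sketch; function names are mine, not from the standard):

```python
def even_parity_bit(bits):
    """Parity bit appended so the word carries an even number of ones,
    roughly as AES3/S/PDIF do per subframe."""
    return sum(bits) % 2

def parity_ok(bits_with_parity):
    """Receiver-side check: an odd count of ones flags an error.
    A single flipped bit is caught; two flipped bits slip through."""
    return sum(bits_with_parity) % 2 == 0
```

Note there is no correction here, only detection — and a double-bit error passes unnoticed, which is the sense in which the data integrity is “otherwise unprotected”.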

About ten years later, a consortium of computer manufacturers decided that the RS-232 serial protocols, commonly used to communicate between workstations and low-data-rate peripherals like terminals and modems, were outmoded when applied to more modern peripherals like keyboards and mice. They came up with the Universal Serial Bus as a replacement for RS-232. Intel produced the first devices supporting it, and the 1.0 version of the standard was released in 1996. Devices supporting it really only came out in 1998, following the release of USB 1.1.

Note that this USB standard was “born digital”; that is, the folks working on it were all digital people with digital data mentalities. As such, data errors were of significant concern, so they added error detection and a re-send protocol to the standard. If you’re using “bulk mode” (for disk drives) or “interrupt mode” (for keyboards and mice), these are used to correct transmission errors via the re-send protocol. However, the USB Audio Class 1 and 2 protocols are layered on top of USB’s “isochronous mode”, so no re-sends are supported; data is streamed from host to device at a constant rate. But error detection is still supported, so the device knows to discard damaged packets.
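Here is roughly what that receive path looks like, sketched in Python. The CRC-16/USB parameters are real (polynomial 0x8005, bit-reflected as 0xA001, init and final XOR 0xFFFF); the function names and the drop-don’t-resend wrapper are my own illustration, not actual driver code:

```python
def crc16_usb(data: bytes) -> int:
    """CRC-16/USB as used on USB data packets:
    poly 0x8005 (reflected 0xA001), init 0xFFFF, final XOR 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

def deliver_isochronous(packet: bytes, received_crc: int):
    """Isochronous delivery: a bad CRC means the packet is dropped,
    never re-sent -- the stream just keeps moving."""
    return packet if crc16_usb(packet) == received_crc else None
```

The point is that the device can always *know* a packet was damaged; what isochronous mode gives up is the ability to ask for it again.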

So suppose you do get a bit error, and the packet is discarded. What do you lose? Isochronous mode USB on a USB 2.0 connection runs at 8 kHz (8000 USB “micro frames” per second), so each packet contains 125 microseconds of audio information. That’s six samples, for a two-channel 24-bit/48 kHz PCM audio stream. Can you hear this? There is likely to be a discontinuity in the waveform, which might manifest as a pop or click.

How common are packet errors? That depends on the hardware and cables. Often it depends mainly on the host (sender) USB device tree design. There’s an excellent discussion of this in a Darko article from 2016, in italics, towards the end of the article. Tests of USB Audio, such as this one by Archimago, suggest that USB Audio packet errors in properly built equipment are very rare, even though such data transmission is not guaranteed to be bit-perfect. In fact, there is almost nothing in the USB hardware stack that would cause such errors; they are almost always due to very rare external events such as cosmic ray hits.

USB Audio Class 2 also contains support for controlling common device functionality from the sender, like adjusting the volume of the device, or perhaps switching between the filters of a DAC.

What about the electrical coupling issues with USB? USB was originally designed for peripherals which were considered part of the computer, like keyboards and mice. As such, it was thought to be a good idea for everything to share a common ground. In addition, it was designed to be able to supply small amounts of power to similarly limited peripherals. Modern equipment will isolate most of this. In fact, the most important thing to remember is that these failure modes are by now well understood, and by now basically engineered out of modern equipment.

So when I look at all this, to me USB is newer, designed by competent digital engineers rather than audio engineers, supports a variety of protocol levels, supports device control, has massively higher bandwidth than S/PDIF or AES, and is better at error detection. And the concerns about errors and jitter are, I conclude, mere FUD goblins promoted to sell unnecessary hardware.

17 Likes

Maybe I missed it, but apart from error-free data being transported across USB, I do not see anything ensuring the data arrives with the correct timing (jitter-free); digital audio is clock sensitive. Do you have info in this area? Sorry if I missed it.

Looking for problems caused by jitter with USB is much like looking for unicorns in a zoo. I can’t say there isn’t one somewhere, but no one who’s looked for them has found one. Jitter tolerance is designed into the protocol.

3 Likes

Compared to what I don’t know, what I know is insignificant. However, I can say that digital audio reproduction at least involves bits, a clock, and electricity. By the time bits leave storage and enter the reproduction pipeline, they are prone to degradation.

I can also say that nothing can squeeze more fidelity out of what is already stored in the bits; the effort goes into avoiding degradation, be it in the digital domain or the analog one, AES or S/PDIF, USB or Ethernet, copper or optical.

With the concept of “bit perfect”, I am curious about its scope. When bits are stored and then replicated to another storage, or, as Hans said, replicated to Mars, bit-perfect in that scope must hold; once the bits enter the reproduction pipeline, however, it is a different matter. Clock and electricity must be taken into account to justify “bit perfect”, or to argue they have nothing to do with the digital domain. Otherwise we are merely believers that degradation will not happen.

I believe a perfect world does not exist, at least from the technology standpoint. I chose Ethernet because I have heard the difference, at the same bit rate, in the analog audio the DAC renders, at least on my system. I was curious about what made the difference if, after all, the bits are perfect, and I still am. Having said that, I would not rule out the possibility that using USB on another system would sound better, or at least make a better impression on another person. As Hans always suggests, let your ears tell you what you like, and enjoy the music.

1 Like

Some signal degradation/jitter/clock slip may happen over a transatlantic cable, or an uplink to Mars, but it isn’t really a thing with a 1 m silver-plated interconnect. How on earth does the modern world even function if I can’t successfully get a signal from a disk drive to my hifi in the same room? @Bill_Janssen is 100% correct: some of these “audiophile” myths and legends really need to be buried. If there’s an audible difference between DAC inputs, it has more to do with the internal circuitry than the transport medium or protocol.

2 Likes

Thanks for the long detailed explanation!

1 Like

Yes, that’s perfectly true inside the actual DAC hardware, typically a chip. But of course these chips have been continuously improved year after year, as well, to eliminate failure modes.

Most of this common trepidation about clocks, electricity, and other such is a holdover from the issues with audio equipment of the 1950s and 1960s, when purely analog circuits had all kinds of interesting pitfalls. Some of those came back with the switch to digital in the 1980s, but were then dealt with in pretty short order.

Hans has to sell ads for overpriced and in some cases useless audio equipment, so he’s careful to keep things in the uncertain zone. I’d say instead, ignore him, and buy stuff from firms that are (1) big enough to be able to afford a competent engineering staff, and (2) down-to-earth enough to not be selling you stuff you don’t need. Look at measurements. If someone tells you there’s some kind of problem with something, demand evidence (measurements) before accepting their story.

If all you want is to hear something you like, take drugs and listen to the silence.

7 Likes

Timing is the key. If I only care about replicating bits to Mars, I know I have to wait, at least with today’s technology :sweat_smile: For reproducing audio from bits, time/clock is critical, and that is different from replicating bits to Mars. I know error correction exists at certain levels (hardware, protocols, etc.). Needing less correction work would be beneficial, assuming the corrections can restore all errors at the required speed; again, clock and timing matter for digital audio reproduction.

Excellent summary, however long. Good coffee-time reading.

Thanks for the clarification

I have ROCK on a NUC i7 with an SSD storing my music files. I connect this to a Cocktail Audio X45, sometimes by USB, sometimes not, and using my ears I prefer the USB connection. I have no idea whether this is technically better or worse than the other connections I try; I go by what my ears tell me. :slightly_smiling_face:

I have watched most of his videos and compared them to what I read in forums, at least from a digital-audio-reproduction technology perspective. If he is the person I should avoid, I should also avoid many community members.

It is fascinating to see how the discussion is based on discrediting people, on dismissing timing/clocking/jitter in digital audio reproduction as the key factor, and on a suggestion to use drugs, which is utterly unacceptable.

No comment.

Hey, if feeling good about what you’re hearing is the ultimate goal, there’s no quicker solution! In fact, there’s even a thread here on drug use:

I personally prefer the stuff from the Napa Valley, but that’s a California thing.

3 Likes

It’s not what drugs you’re strung out on they care about as much as whose. - Todd Snider

1 Like

A fiber network will also help avoid various jitter and noise issues.

BTW: hope to see a Nucleus with an SFP(+) interface :grinning:

1 Like