Core as endpoint vs dedicated endpoint

Is Archimago some kind of Audio Saint? How do we know the stuff he measures is relevant to audio performance? To use a silly analogy, is he measuring sperm count for a patient with heart disease?

Modern DACs try that, yes, but a cheap transport will still improve the sound of very expensive modern DACs when they are connected over USB from a vanilla computer. So whatever modern DACs do, it’s clearly not enough.

This may be totally in my head, but I found that going from the cheap-o crap switching power supply to a good linear power supply noticeably improved the sound. Clean, constant voltage should mean more accurate timing, especially on something without a hardware clock (like a Raspberry Pi). I’m not using a Raspberry Pi, though, and I don’t know whether the system I’m using has a hardware clock or not.

He’s a scientist, not a saint. He measures the things which would reveal the kinds of problems that people keep yakking about here, and sometimes the measurements do reveal them. He explains his experiments and shares the results. What more can you want? Are you suggesting that since he’s only human, he might be lying about the results?

So, how do we know? Education.

1 Like

I was not suggesting anything. I was asking questions.

Basically, how do we know that what he is measuring has any impact on audio, or any relevance? Maybe there are other things out there that he does not measure that do impact the SQ. I don’t know.

For example, a friend and I compared side by side a Chromecast Audio vs. a Sonore ultraRendu using Roon. Same DAC and everything else. The ultraRendu sounded significantly better. Not subtle. Now, according to those measurements, the CCA was perfect using Roon. Bit perfect and all. So in theory the CCA should sound the same as any streamer out there. But it didn’t. Or we were both deaf or foolish.

Well, you make a good point. This SQ thing is a slippery term. What most of these measurements measure is actually transparency, the “fidelity” part of high-fidelity – does what comes out match what went in, or match what was expected? And there’s no guarantee that the output will sound “good”, especially if you’re used to a particular kind of non-transparency, like the wow & flutter of vinyl records or the “warmth” distortion of a tube amp. But it can reveal whether electrical noise affects the output, or whether jitter creeps into the output signal. That’s where measurements are most useful. Whether or not something sounds “good” is highly subjective, and based on lots of non-aural factors as well as the aural ones, and there’s no good way to measure it except with anecdotal evidence.

A lot of this seems like mother’s milk to me, and I understand it may be unfamiliar to others. The past few years I’ve been working in a group specializing not only in data science, but in diagnostics of cyber-physical machinery based on time series sensor readings. So feeding the analog output of a DAC, say, into an A/D converter and comparing to the ideal model is a very familiar way of looking for flaws.
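
To make that concrete, here’s a minimal sketch of that kind of null test, assuming you’ve re-captured the DAC’s analog output through an ADC into a WAV file. The file names are made up for illustration, it assumes mono files at the same sample rate, and a real rig would also have to correct for clock drift between the DAC and ADC:

```python
# Minimal null test: how much of the captured DAC output is NOT the
# original signal? File names are hypothetical; assumes mono WAVs at
# the same sample rate (a real setup must also correct clock drift).
import numpy as np
from scipy.io import wavfile

rate_ref, ref = wavfile.read("reference.wav")
rate_cap, cap = wavfile.read("dac_capture.wav")
assert rate_ref == rate_cap, "resample first if the rates differ"
ref = ref.astype(np.float64)
cap = cap.astype(np.float64)

# Align the two with cross-correlation over a short excerpt
# (the ADC loopback introduces an unknown delay).
n = min(len(ref), len(cap), rate_ref * 5)  # first ~5 seconds
lag = int(np.argmax(np.correlate(cap[:n], ref[:n], mode="full"))) - (n - 1)
if lag >= 0:
    cap = cap[lag:]
else:
    ref = ref[-lag:]
n = min(len(ref), len(cap))
ref, cap = ref[:n], cap[:n]

# Best-fit the capture's gain, subtract, and see what's left over.
gain = float(np.dot(ref, cap) / np.dot(ref, ref))
residual = cap - gain * ref
null_db = 10 * np.log10(np.sum(residual**2) / np.sum((gain * ref)**2))
print(f"residual vs signal: {null_db:.1f} dB (more negative = more transparent)")
```

Whatever survives the subtraction is the noise, distortion, and jitter; a deeply negative residual is what “transparent” means in numbers.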

Of course, the other factor which comes into play is the increasing componentization of progress. As a technical domain agrees on the “best” way of doing things, people immediately create chips which package that agreement. XMOS, for instance, seems to have a lock on the input sides of DACs, the part which reads the USB and optical and coax, and re-clocks it and does isolation. There are, what, three big D/A converter chip lines? So, barring some custom pre-amp stage (and many of those have been packaged into chips as well), you’re really going to have a pretty predictable combination of some small number of off-the-shelf components in any DAC. Same for amplifiers, which really only have two components. So if you know the characteristics of those components, you can make a good guess at the characteristics of the combination.

So, this is why the measurements of the Chromecast Audio are exciting. They’re confirmed independently by both Audio Science Review and Archimago. Google seems to have done some very nice engineering using off-the-shelf components properly, to produce a cheap streamer with reasonable digital outputs, and unexpectedly hi-fidelity analog outputs, for $35. What a deal!

1 Like

Thanks Bill.

Interesting stuff ref the CCA.

There’s also issue of symbiosis between chain items to consider.

For example, I’m getting the best sound I’ve heard out of my Beyerdynamics coupled with a quite lowly LG V30 cellphone. It’s not logical that it sounds better to my ears than the T1 + Mojo combo.

I think that switching my Pro-Ject S2 from a Raspberry Pi to an Intel NUC (which is also running ROCK) has maybe improved clarity a tiny bit - but because I can’t do an instant inline A/B with a switch, I can’t be absolutely sure.

What is different, however, is the reliability. The S2 is way happier on the NUC than on a Pi, so I can finally fully use its MQA features, try upsampling to pointless sample rates (PCM 768, DSD512) or whatever. I think the Pi was often struggling to keep it fed, which is odd as it has no problems at all under identical conditions feeding an AudioQuest DragonFly Red (a similar but older/downlevel ESS Sabre 32 DAC relative to the S2, and I guess a similar but less powerful XMOS USB chip relative to the S2).

There are reasons with some digital and analog chains why one configuration can be better than another for pure SQ. The obvious one is the ability to keep the DAC fed in a timely manner - failure to do so results in dropouts/glitches etc. But I have found a subtler case that just seems to cause a loss of clarity, and TBH, given what I know of the USB 2 audio protocol and DAC design, I struggle a bit to pin down a specific cause. Maybe the DAC clock is being forced to adapt a lot, or it is being forced to resample a lot when the data feed is marginal? Not sure.

USB connection noise (induced noise from the computer) can possibly impact the DAC depending on how good the DAC is at rejecting such noise. The R-Pi is not known for being the best at this, compared to Apple Macs, for example, which are known to be quite good in this respect. Similarly, if the DAC is being powered by the computer over USB, then it is even more at the mercy of the computer’s PSU, and induced noise can cause jitter etc. which is going to diminish what you hear.

In theory an optical connection should give isolation from these power-related problems, but a highly jittery source clock may be a struggle for the receiving DAC; modern designs are getting very good at coping with this, but there are still limits.
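
To put rough numbers on why jitter matters: there’s a standard rule of thumb for the best-case SNR a converter can reach when sampling a full-scale sine of frequency f with RMS clock jitter tj, namely SNR = -20·log10(2π·f·tj). The jitter figures below are just example values, not measurements of any particular device:

```python
# Best-case SNR imposed by sampling-clock jitter alone, using the
# standard rule of thumb SNR = -20*log10(2*pi*f*tj) for a full-scale
# sine at frequency f with RMS jitter tj. Jitter values are examples.
import math

def jitter_snr_db(freq_hz: float, jitter_rms_s: float) -> float:
    return -20 * math.log10(2 * math.pi * freq_hz * jitter_rms_s)

for tj in (1e-9, 100e-12, 10e-12):  # 1 ns, 100 ps, 10 ps RMS
    snr = jitter_snr_db(20_000, tj)  # worst case: a 20 kHz tone
    print(f"{tj * 1e12:6.0f} ps jitter -> {snr:5.1f} dB max SNR at 20 kHz")
```

Nanosecond-class jitter already caps you below 16-bit/CD quality (~96 dB) at high frequencies, which is why DAC input stages work so hard at reclocking.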

As for the CCA - I have one, and for the money they are actually very good. Of course many well-known and more expensive DACs are better, but for the price you definitely can’t complain. I also think that AKM chipsets in general seem to have the best punch and tone (vs. ESS and Burr-Brown designs), even if a cheap implementation such as the CCA may lack in clarity. The problem with AKM-based DACs is that to get the best out of them, you need to pay out a lot - the RME ADI-2 for example, which of course is very good indeed.

Please stop calling me that. I was asking a legitimate question. If YOU believe the earth is flat, don’t call people who question that insane and paranoid.

Good thought, Adam. So many great points in this whole thread. My evolving understanding is that the things which affect sound quality are quite different on the two sides of the DAC. The digital side is all about latency; the analog side is all about distortion.

On the digital side, latency (the time between commanding something to happen and the time its effects occur) rules. How long does it take to read a block from the NAS, or the disk, or memory? How long does it take to decompress the block? How long does it take to apply DSP conditioning to the block? How long does it take to transmit the block to an endpoint? How long does it take the endpoint to see a USB message from a DAC and respond to it?

These are all important to SQ. If your latency is too high in some parts of the system, you will get stutters, pops, clicks, even wedging of the DAC (the bit-rate/bit-depth problem). So a faster CPU will reduce the processing latency in decompression and DSP. An SSD instead of a spinning disk will reduce the time it takes to get data off the disk (though modern OS caching will also reduce it even with spinning disks). Using a local disk instead of a NAS will eliminate network transmission time, and decrease the latency of reading a block. Using Ethernet instead of WiFi in congested or spread-out areas will reduce the latency between the core and the endpoint by reducing retransmits. The only good news here is that modern computers are so fast most of these issues don’t really matter. Network capability is the main Achilles heel here.
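
Here’s a toy sketch of that budget reasoning: as long as the whole pipeline can produce a block faster than the DAC plays one, you’re fine; the moment it can’t, you get the stutters and clicks. Every stage timing below is an invented placeholder - measure your own system:

```python
# Back-of-the-envelope budget for the digital chain: each audio block
# must be produced faster than the DAC consumes it. Every stage timing
# below is an invented placeholder; measure your own system.
SAMPLE_RATE = 192_000     # frames per second
BLOCK_FRAMES = 4_096      # frames delivered to the endpoint per block

deadline_ms = BLOCK_FRAMES / SAMPLE_RATE * 1000  # about 21.3 ms

stages_ms = {
    "read block (NAS or disk)": 3.0,  # hypothetical
    "decompress (e.g. FLAC)":   1.5,  # hypothetical
    "DSP / upsampling":         6.0,  # hypothetical
    "network to endpoint":      4.0,  # hypothetical
    "USB transfer to DAC":      1.0,  # hypothetical
}

total_ms = sum(stages_ms.values())
print(f"deadline per block: {deadline_ms:.1f} ms, pipeline total: {total_ms:.1f} ms")
if total_ms > deadline_ms:
    print("cannot sustain real time -> stutters, pops, dropouts")
else:
    print(f"headroom: {deadline_ms - total_ms:.1f} ms per block")
```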

On the analog side, the output of the DAC, things are much more complicated because there are so many ways to get distortion. This is where you need a good electronics engineer with lots of experience. (Or a known-good pre-packaged circuit in this increasingly componentized world.) If you’re building with discrete components (resistors, capacitors, inductors), you’d think it would be easy. You pull up a circuit design program on your computer, plug some components together, simulate it, and see what kind of transfer function you’ve built.

But in the real world, the components you’re building with aren’t perfect. They vary by as much as 20% in either direction. What’s more, components like capacitors are notorious for (a) changing their values over time (this is why you “burn in” your electronics), and (b) dying horrible and sudden deaths after a certain number of hours of operation (mainly electrolytics). Half the flat-screen TVs junked in this country are junked because of blown capacitors, and half of those can be fixed by replacing a couple of those capacitors. I’ve got some 45-year-old speakers I’m listening to right now; I haven’t opened them up to look at the crossovers, but I’m pretty sure they don’t sound the way they did when they were new, because the capacitor values have surely changed. That’s why high-end capacitors in audiophile gear cost so much: they have lower variance, don’t change as much over time, and can often handle higher and more abrupt current changes (for sudden and sharp noises in the audio which require the speaker diaphragms to be quite suddenly in some other place).
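
As a concrete illustration of that 20% point, here’s a small Monte Carlo on a first-order RC low-pass. The nominal values are made up, chosen only to land near a 2 kHz crossover-ish corner:

```python
# How much does an RC filter's corner frequency wander when R and C
# each carry a ±20% tolerance? Nominal values are made up, chosen to
# land near a 2 kHz crossover-ish corner.
import math
import random

R_NOM = 1_000    # ohms
C_NOM = 8e-8     # farads -> nominal corner ~1989 Hz
TOL = 0.20       # ±20% component tolerance

corners = sorted(
    1 / (2 * math.pi
         * R_NOM * random.uniform(1 - TOL, 1 + TOL)
         * C_NOM * random.uniform(1 - TOL, 1 + TOL))
    for _ in range(10_000)
)

nominal = 1 / (2 * math.pi * R_NOM * C_NOM)
print(f"nominal corner: {nominal:.0f} Hz")
print(f"5th-95th percentile: {corners[500]:.0f} - {corners[9500]:.0f} Hz")
```

Two “identical” filters can land hundreds of hertz apart, which is exactly the kind of thing the circuit simulator won’t tell you.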

What’s more, there are physical effects which don’t show up in the circuit software. For example, if you’re building a crossover circuit for a speaker, you don’t want to physically align two of your inductors, or you’ll get inductive coupling which will introduce distortion. It’s a nightmare to get this all right. Well, a PITA, anyway.

Componentization to the rescue. I’ve got a little amp which is just a box around a board like this one with a TPA3116 inside it. TI componentized the amplifier circuitry; someone else combined that with a filtered power supply. Two components.

Anyway, the takeaway for me here is, if you’re buying for the digital side of the world, think about latency, not electrical distortion, because unless you’ve got something like a bad floating-point unit or memory errors, the bits are not going to change, no matter how expensive or cheap your computer is. The key is whether you can get them where they need to go quickly enough. Spend your money on the analog side, where the circuits inside speakers and amplifiers really matter (though amplifiers are now increasingly componentized).
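
If you want to convince yourself of the “bits don’t change” claim, a hash makes it concrete. A sketch assuming the soundfile package is installed (and “track.flac” is a hypothetical file name):

```python
# Decode the same file on the cheap computer and the expensive one and
# hash the raw PCM: identical digests mean identical bits, regardless
# of what the machines cost. Assumes the 'soundfile' package is
# installed; "track.flac" is a hypothetical file name.
import hashlib
import soundfile as sf

pcm, rate = sf.read("track.flac", dtype="int32")  # decode to raw PCM
digest = hashlib.sha256(pcm.tobytes()).hexdigest()
print(f"{rate} Hz, {pcm.shape[0]} frames, PCM sha256: {digest}")
```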

1 Like

Latency doesn’t matter at all - it’s just down to the amount of memory used for buffers. In fact more data in memory is often better (which results in more latency), because more data in memory better decouples a DAC’s I/O requirements from, say, disc or network I/O and thus reduces the chance of a dropout or glitch.

There is a thread somewhere asking for tracks to be pre-loaded into memory and played entirely from memory, with the aim of getting the maximum decoupling from I/O and, I guess, the assumption that a less busy CPU induces less electrical noise.

What matters most on the digital side (ignoring sample processing) is a perfectly stable, highly accurate sample clock and the ability to keep audio data flowing into the DAC at exactly the data rate demanded by the DAC, so as to keep its data buffers full.

This also loosely translates into the rest of the digital data stream chain being able to keep up as well. If part of the data stream chain cannot reliably output data in a timely manner at the demanded rate (for example because it is several thousand miles away via the internet), but can exceed the rate, then a streamer will tend to pre-buffer more data in memory, ensuring that it can reliably keep the DAC fed.
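
A toy simulation of that decoupling, with all numbers invented: the DAC drains one block per tick, the source averages slightly more than one block per tick but delivers in irregular bursts, and the only thing that changes between runs is the buffer size:

```python
# Toy model of buffering as decoupling: the DAC drains one block per
# tick; the source averages slightly MORE than one block per tick but
# delivers in irregular bursts. All numbers are invented.
import random

def count_underruns(buffer_blocks: int, ticks: int = 100_000, seed: int = 1) -> int:
    rng = random.Random(seed)
    fill, underruns = buffer_blocks, 0   # start with the buffer pre-filled
    for _ in range(ticks):
        if rng.random() < 0.11:          # bursty arrivals, ~1.1 blocks/tick on average
            fill = min(buffer_blocks, fill + 10)
        if fill > 0:
            fill -= 1                    # DAC consumes one block
        else:
            underruns += 1               # buffer ran dry: audible dropout
    return underruns

for size in (4, 16, 64, 256):
    print(f"buffer of {size:>3} blocks: {count_underruns(size):>6} underruns")
```

Bigger buffer, more latency, fewer dropouts - which is the point: the memory does the decoupling, and the latency is just a side effect.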

For us, latency is just a user convenience that may or may not matter in a given scenario. For music listening, the only reason it matters is our tolerance of the time from pressing play to hearing music, or the time from pressing pause/stop to the music actually stopping.

Only in a system that is revealing is there an overwhelming consensus that it’s sonically better to use a core with a separate endpoint. It depends which circles you move in, but if it’s better sound quality you are after, you need to put this in the context of a system that will reveal the benefits. Also, using a laptop as an endpoint would not be the best choice.

Speaking from experience of course. :blush:

I had the Core (Roon Server) connected to a DAC in my main system for the first two years of using Roon. A few months ago, I moved the Core Mac mini elsewhere and now just use Roon Bridge for the DAC connection.

I didn’t notice a difference in sound quality.

2 Likes

I use a dedicated MSI Cubi 3 fanless system as my Roon core, running Ubuntu server with a low-latency kernel. I connect that via USB to a Schiit Wyrd USB cleaner, and then to my miniDSP 2x4HD acting as DAC and digital crossover for my Linkwitz LXMini speakers. This configuration sounds fantastic. It’s rock solid, and the system has enough computing power to handle DSD conversions (depending on the DAC’s capabilities) and to feed 3 other systems throughout the house simultaneously (all via Ethernet, no WiFi). The other 3 endpoints are Lumin D2 DAC/Streamers, so I don’t use custom-built Roon endpoints there.

3 Likes

We had a little Singapore Linkwitz gathering with Frank from www.magiclx.com as he was in town. Amazing speakers I must say. I personally have Pluto, Orion, LX521 and LXmini :smiley:

1 Like

My Core runs on a Mac Mini (2014 Model with 8GB RAM).
Roon is the only software installed on the Mini, and the only software that’s ever run on it.
(Except maybe for Safari, which I used to download some install files.)

The Mini also serves as the endpoint for my main audio system, connecting directly to my Aqua DAC via USB.
I use an RPi/DigiOne as endpoint in my bedroom system.

As a test, I temporarily tried the DigiOne in my main system (connected to the Aqua DAC). I thought I discerned a small improvement in SQ, but it could have been my imagination.

Going direct from a dedicated Core/Endpoint Mac Mini into my DAC just makes sense to me. I would need to hear a substantial improvement in SQ to warrant a more complex/expensive setup. Not saying that possibility doesn’t exist, just that a DigiOne didn’t achieve that (to my ears).

2 Likes

I am running 3 different configs:

  1. NUC [ROCK] --> USB --> DAC.
  2. NUC [ROCK] --> WiFi --> RPi 3 [Roon Bridge] --> USB --> DAC.
  3. NUC [ROCK] --> WiFi --> RPi 3 + Allo DigiOne [Roon Bridge] --> SPDIF --> DAC.

I did several tests and live comparisons. The best SQ is obtained with config 3. The second best SQ is with config 1, and the last one with config 2.

In the past years, I noticed that the USB link is very sensitive to the quality of the source computer, the software and OS being used and its setup, and the USB cable. It is quite tricky to say why one computer or one software distribution or one whole config sounds better than another. Add to that that each DAC may have its own USB implementation…

I can only say that in my place, the RPi 3’s USB output is poor compared to the NUC’s, whatever software is being used.

I came to the same conclusions about RPi USB output.

I would add that the USB link is also pretty much dependent on the DAC input. I had a Teddy Pardo DAC, then the Chord 2Qute and now the Chord Qutest, the latest one being the most robust against USB source variations.

One point here is that “just any” endpoint isn’t better than using the core. You’ll want a device that is electrically as quiet as you can make it, or buy it that way. Presumably the endpoints that are themselves audio gear, versus a version of a computer, would be ideal, but I have also had a lot of success with the ComputerAudiophile.com recommended configurations. I really liked those Intel Atom embedded mobo builds from 10 years ago - still very low power and noise, and an endpoint doesn’t need more processing power than that.

I use a NUC with Roon ROCK (passive cooling + LPS) as source, and a Devialet 400 as DAC/amplifier.
I went through the following process of improving SQ:

  1. played directly from ROCK to the Devialet over wired Ethernet (using a dedicated switch with LPS and AudioQuest Ethernet cables)
  2. played over the USB output of the NUC via a SOtM tx-USBultra (for USB signal reprocessing and reclocking)
  3. now playing via a dedicated Roon endpoint, a Lumin U1 mini (with SBooster LPS).

SQ improves from step 1 to 3… :wink:

1 Like