Noise and jitter from power or networking: musings on contracts

I have argued against the dogma that networking is always better than USB, that SPDIF is better than USB, or that Toslink is worse than coax. And I have quietly marveled at the extremist measures many take with power supplies and cables of all kinds. (And so has Darko.)

I argue that it depends on the design of the device: what is it sensitive to? This kind of analysis is behind Chord arguing that optical is good: it carries no electrical noise, and while it is jittery, their DACs are not sensitive to jitter. And it is why Auralic argues that WiFi is better than Ethernet.

And why we should be nuanced in choosing equipment and connections, because dogma is not always right. Try!

This triggered some musings about contracts. When we use a service, we have always cared about the signature of its interface: the API or protocol, the arguments and return values. But lately, driven by the cloud service revolution, we also consider the contract. (This is not new; we could and should have considered the contract in classical architectures, and some of us did, but it became widespread with cloud services.) The contract describes what the service promises in terms of performance, reliability, and availability, and often security, regulatory compliance, and auditability, and of course cost. (The “ilities”.)

Clarity about the contract is essential. If I build a system that calls a service expecting millisecond response and the service provider built it for second response, my service can’t meet its performance goal. If I build a system that promises five nines availability (99.999%) and I use a service that offers three nines, I must design around that gap, otherwise my contract is not fulfilled.
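To make the nines concrete, here is a quick Python sketch (function name my own, purely illustrative) of the downtime budget each availability promise implies:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def downtime_per_year(availability: float) -> float:
    """Seconds of permitted downtime per year for a given availability."""
    return (1.0 - availability) * SECONDS_PER_YEAR

# Three nines vs five nines: the gap the caller must design around.
print(f"99.9%:   {downtime_per_year(0.999):>8.0f} s/year (about 8.8 hours)")
print(f"99.999%: {downtime_per_year(0.99999):>8.0f} s/year (about 5.3 minutes)")
```

Roughly a factor of a hundred between the two promises, which is exactly the gap the three-nines consumer has to design around.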

I think the audio industry is playing fast and loose with contracts. Consider the problem of noise coming in through the power line. Many designers build a power supply that converts AC to the DC that the device needs, but if there is noise on the line, they shrug and point fingers: “it’s not my fault it sounds bad, there was noise on the line”. But the AC line is a service; what is the contract it offers? Does it promise zero noise? Does it promise noise below a certain level? Does it even talk about noise? What is the distortion of the 60 Hz sine wave? Does your utility specify that? What is the contract about frequency stability? (Long term stability is actually quite good, because they manually stabilize it so that railroad clocks are correct, but they don’t promise anything about wow and flutter and jitter.) Most electronics designers act as if the AC service had a contract with a single frequency, zero deviation in frequency and voltage, no other frequencies. But that’s not the contract.
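As an aside, the distortion question at least can be posed quantitatively: total harmonic distortion (THD) compares the energy in the harmonics of the 60 Hz fundamental to the fundamental itself. A rough Python/NumPy sketch (my own, illustrative, not a metering-grade implementation):

```python
import numpy as np

def thd_percent(samples: np.ndarray, fs: float, fundamental: float = 60.0,
                n_harmonics: int = 10) -> float:
    """Estimate THD (%) of a sampled waveform from its FFT magnitude bins."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    bin_of = lambda f: int(round(f * len(samples) / fs))
    fund = spectrum[bin_of(fundamental)]
    harm = np.sqrt(sum(spectrum[bin_of(fundamental * k)] ** 2
                       for k in range(2, n_harmonics + 2)))
    return 100.0 * harm / fund

# Synthetic "mains" waveform: 60 Hz sine plus 3% third harmonic.
fs = 6000.0
t = np.arange(6000) / fs                      # one second at 6 kHz
wave = np.sin(2 * np.pi * 60 * t) + 0.03 * np.sin(2 * np.pi * 180 * t)
print(f"{thd_percent(wave, fs):.1f}%")        # reports roughly 3%
```

If your utility published a number like this in its contract, the power-supply designer would know what to design against.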

And similarly with noise coming in over the network. If the Ethernet contract is silent on the amount of noise carried by the cable, you have to design accordingly. Or if the contract is explicit but the promise is bad, you have to design accordingly.

Consider jitter. When SPDIF was designed (for CD transports, remember), they decided that the source should own the clock. A terrible decision, probably inspired by turntables: you have a digital technology that demands sub-nanosecond accuracy, and you try to achieve it by stabilizing a mechanical device? But this led to DAC designers saying: the SPDIF contract says the source owns the clock, so I’ll design a DAC based on that incoming clock signal, and if it sounds bad with high jitter, it’s not my fault, you said you’d provide the clock. Gradually, quality device vendors took responsibility for timing by reclocking. And of course, with asynch USB and networking, there is no incoming clock to jitter, by definition. But think of the difference in contract: SPDIF says, “I will provide the clock, but the stability is on a best-effort basis, no guarantees”, while the asynch techniques say, “I have nothing to say about the clock; you are responsible for that”. Which contract is better? The asynch contract is more honest; the designer is not misled by undeliverable promises. In fact, they are really the same contract, because a clock with unspecified accuracy is, contractually, the same as no clock at all: the customer has to take responsibility for timing in both cases.
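As a back-of-the-envelope check on that sub-nanosecond claim, a standard rule of thumb bounds timing error by the worst-case slope of a full-scale sine: to keep jitter-induced amplitude error under one LSB of an N-bit system, the jitter must stay below 1/(2π·f·2^N). A small Python sketch (my own, illustrative):

```python
import math

def max_jitter_seconds(freq_hz: float, bits: int) -> float:
    """Worst-case clock jitter keeping the amplitude error of a
    full-scale sine below one LSB: t_j < 1 / (2 * pi * f * 2^N)."""
    return 1.0 / (2 * math.pi * freq_hz * 2**bits)

# 16-bit audio, 20 kHz tone: roughly 120 picoseconds.
print(f"{max_jitter_seconds(20_000, 16) * 1e12:.0f} ps")
```

So even plain CD resolution asks for timing two or three orders of magnitude tighter than what a mechanically stabilized transport could plausibly promise.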

So ideally, every component designer should be clear-eyed about the contracts of the services he depends on, and design defensively. But that is not entirely realistic. To see the cost of a truly robust power supply, come what may, look at the PS Audio regenerators. If really bad power is rare, maybe it is better to externalize that protection so only those who need it pay for it. But noise is common, and I think every device should protect against noise.

And for those things you don’t protect against, what is the failure mode? I would like to see something like: the device meets its distortion specs with power between 100 V and 150 V, and shuts down otherwise.


Very enlightening. Thank you!

I think I agree with everything you have said, but can we please use the perfectly adequate word specification instead of contract?

The lawyer in me quails at sorting out whether someone is referring to an enforceable obligation or an engineering parameter.

Audiophiles have long ignored measured specifications, having been told by reviewers that components can measure ‘good’ but sound ‘bad’ and vice versa. As a direct result they obsess over things they think they understand. Low noise is always good, as are super-stable clocks? Well, yes, but in many cases it results in expensively over-engineered solutions. The only harm done is to your wallet, except when you start preaching these ‘facts’ to engineers who do actually understand how components work together at a system level.
Audio dogma is difficult to shake as it is repeatedly parroted by certain manufacturers jumping on the bandwagon of the week.


I didn’t make it up; the term is widely used in modern software engineering.
But in its defense, let me say:

First, both words are used but with distinct meanings. The specification is about what the service does and how you talk to it. The contract is about how well it does it.

Second, when using cloud services those contractual attributes are often tied into actual business contracts: you may pay for performance and availability, and confidentiality/privacy/integrity/non-repudiability may be involved in civil and criminal litigation.

I’m picking nits, the terminology isn’t really important for this context.
But the distinction is: the specification for the wall outlet is 110 V @ 60 Hz up to 15 A, but the fact that those numbers are delivered on a best-effort basis and are not guaranteed, that’s different.

EDIT Wrt enforceable obligations, like getting your money back if the cloud service is down more than 0.1% — one key part of the contract is specifying what is not an enforceable obligation. Like AC power distortion.

I read all of this and I see no useful information for the average enthusiast. What is your point and how does it help anyone here?


I labeled it “musings”. Not “advice”.
Thinking out loud.

But consider the observation I made in the Chromecast thread, where a 10-year-old Bel Canto DAC sounded worse with a Chromecast optical feed than with SPDIF RCA from a Meridian MS600, but the more recent Chord Hugo 2 was perfectly happy with the Chromecast. This is a real, audible difference. And it is explainable, and in fact so explained by Chord’s Rob Watts: the Hugo 2 avoids the dependency on the jittery signal from the optical connection. The specific facts can be described as a very special case. But by identifying it as one example of a more general pattern, we may find it easier to identify, understand, and avoid or remediate similar cases, instead of treating each case as unique.

A central part of science and engineering: identifying patterns.

All of that to get to the point that TOSLINK is not all that great in general, but can be okay with DACs that are less prone to issues with jitter, like much of the Chord line or the PS Audio DirectStream DACs?

TOSLINK is to be avoided anyway, as it is limited in bandwidth potential. The galvanic isolation it provides does not generally make up for the jitter it induces and the limited bandwidth.

That is the pattern!


Worth reading again as much wider points are covered with regard to design philosophy and implementation and how things are changing. I found this insight very informative and helpful.

This matches my experience.

If a component needs a special cable, interconnect, or power cord, then it is inadequately designed. I have gotten rid of several boat anchors in the past due to finicky behaviour.

If a component sounds different on different inputs (optical, coax, USB, WiFi, etc.) with the same digital data, then the component is obviously poorly designed.

I cannot understand those who choose to believe that special cables, interconnects and power cords are necessary. I believe that high-end components should be well designed and reliable (within reason; extremely dirty power can probably screw up anything). I am not saying that interface noise/jitter doesn’t exist - of course it does, it always exists - and that is why the very best component designs are able to suppress interface noise/jitter below audibility.

Fortunately there is such a thing as balanced pro audio with XLR interconnects. How anyone can accept cheap RCA connections in a high-end setup is simply beyond belief. Of course you get ground loops with RCA connections, as their design is cheap and inadequate.

I concur that Chord make excellent high-end DACs. Weiss is excellent too. Benchmark is also excellent (Stereophile Class A+). Benchmark DACs are not in the least finicky about power or which digital input is used - solid and reliable high-fidelity sound. Surprisingly, Benchmark use an SMPS (Switched-Mode Power Supply) - they have been able to push the operating frequency of the power supply up to 0.5 MHz, and nothing gets through to the audible analog range - zip, zilch, nada - the ubiquitous low-level 50 or 60 Hz noise (and its harmonics) is a thing of the past. Large noisy transformer cores aren’t needed; a small one with very little EM radiation will do. Isn’t technical progress wonderful?


Again… implementation is everything and it varies, case by case.

Yes, SE is often used internally with a conversion back to balanced on the output. This changes nothing about the advantages of balanced cabling between components. RCA is still a cheap connection, which is why it is popular in low-end applications. Ubiquitous use of RCA doesn’t make it the best approach.

You are free to dismiss everything as hand waving. It sure is easy to generalize and dismiss others isn’t it?

There are subjects such as electrical engineering, audio engineering and electronics design. Just because you admittedly have no knowledge yourself please don’t assume that all others here have no training or experience.


Hi Jeremy

You’re right - I edited my post to keep things more objective. I always try to point to what highly respected and highly qualified people say, like the Rob Watts (Chord) post I linked above. I can also point to information from another well-respected and qualified designer in favour of balanced connections.

This tells us there simply is no single best approach… so I would never make a comment like “How anyone can accept to use cheap RCA connections on a high end setup is simply beyond belief”

There are trade-offs in all parts of the design (and manufacturing) processes.

We need to be careful with making claims, especially on complex subjects like this.

Apologies if I offended - not my intention.

Well that response isn’t that of someone interested in debate! Your way or the highway!


Agreed about trade-offs. RCA is a small and inexpensive connector; XLR is bulky and more costly. That XLR balanced interconnects have technical advantages over RCA (improved immunity to noise and ground loops) is pretty much undisputed.

Agreed that internal circuit topology of SE vs fully balanced is debatable. So your point about internal circuit topology is dead on. Good point.


Good point. You could say it is a philosophical question whether designers of components should concern themselves with the robustness of their products with respect to interconnections, interfaces and power. The alternative philosophy is that jitter, ground loops, EM/RF noise, dirty power supply noise, etc. are not the responsibility of the component designer.

For example, many folks seem quite happy purchasing $1000s of ancillary equipment like reclockers, special cables, power conditioners etc. in order to improve the performance of their components. Should this extra gear really be necessary for expensive components or only necessary for cheaper less thoroughly designed/built components?

Just a thought…

Agreed and in that post I shared, Rob Watts actually acknowledges that there are certain situations where balanced is better… high probability for the reasons you mentioned.

Cheers mate!

Now you are making me work! I preferred when I could drop a glib comment and run!
My view on the USB decrapifier vogue. It is a phase. People figure things out and the first stage is to offer those solutions to others. This brings confirmation. Once you have that, you build it into your USB fed hardware. So the reason why USB is better now than it was five years ago was because of these decrapifiers which are now not a separate box but on an isolation chip between the physical interface and the electronics. The same with power and what we are learning from super caps and noise injection. And I have a separated box that reclocks SPDIF. Chord as an example now do that to SPDIF in their DAC making them jitter immune.
So based on all of that, if these devices work on something designed in the last couple of years, the designers need shooting. Because they have either willfully or, through ignorance excluded that technology.


Agree 100%. Jitter was not understood to be an audible problem until the mid-’90s. Mathematically, if you assume noise from jitter is random then you don’t really have to worry much about it, and years ago many manufacturers made that very assumption. However, we now know that many forms of jitter can lead to periodic noise, and some can lead to noise that is correlated with the music signal. We now know that what was thought to be irrelevant jitter can cause audible hash or glare even at low levels. The original thinking was that a clock accurate to 1 nanosecond would be enough, but lately we are approaching 1 picosecond. Now interface jitter is regarded as normal, and receiver devices are expected to meet a jitter tolerance mask - standards that were only developed in 1992, roughly a decade after the CD was introduced.

Of course, to most enthusiasts this is old news - jitter has been much less of a problem since the mid-2000s (it took a decade for more than just a few manufacturers, like Ed Meitner, to respond to the problem and begin to slay the dragon). I believe that in the latest round of well-designed DACs, jitter should be negligible in terms of audibility. Recently DACs have added a speed control to their jitter-correction algorithms, because it has been shown that low-frequency jitter can be audible. Benchmark maintains master-slave jitter correction down to below 1 Hz (to ensure no LF jitter is audible). Many DACs still lack this speed control, and as the slave hunts the master it is possible that PLL-related jitter corrections become periodic and may in fact cause audible jitter themselves. Even worse, it is possible that the PLL system in some DACs is converting more or less random jitter into periodic jitter - the cure being worse than the disease.

Fortunately the AES J-Test is a very useful end-to-end test signal for a DAC. The J-Test is an excellent stress test, and most DACs now pass it with flying colours. It is a digital test signal formed by adding a tiny periodic LSB square wave to a massive full-scale sine wave; the final output of the DAC is inspected to see whether it comes out clean. The nice thing is that this tests the entire system, including the analog output. Stereophile have been running these tests since the late 2000s - kudos to John Atkinson.
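For the curious, a J-Test-like stimulus is easy to sketch. The canonical version combines a sine at Fs/4 with an LSB-level square wave at Fs/192; the exact levels and word lengths in the AES literature vary, so treat this Python/NumPy fragment as illustrative rather than a reference implementation:

```python
import numpy as np

fs = 48_000                                    # sample rate (Hz)
n = np.arange(fs)                              # one second of samples
tone = np.sin(2 * np.pi * (fs / 4) * n / fs)   # the "massive" sine at Fs/4
lsb = 1.0 / 2**15                              # one 16-bit LSB (full scale = 1.0)
# The "tiny" square wave at Fs/192 (250 Hz here): half-period of 96 samples.
square = lsb * np.where((n // 96) % 2 == 0, 1.0, -1.0)
jtest = np.clip(tone + square, -1.0, 1.0)      # stimulus for the DAC under test
```

The low-rate LSB toggle is what stresses the receiver’s clock recovery; if the DAC’s analog output shows sidebands around the Fs/4 tone, jitter is leaking through.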

Note in the article how several popular players fail to achieve CD quality. Of note, the Bryston BCD-1 qualifies as hi-rez and is a bargain!

You could argue that until the press (mostly JA) started to publish measurement results, many manufacturers were lazy about the issue and did not bother to address a problem known since 1992. Even in 2009, a great many DAC components still had significant issues. While Weiss, Bryston, Benchmark, EMM Labs and a few other manufacturers were leading the way in high fidelity, it is not obvious that it helped their sales significantly. Audiophiles for the most part still preferred brand recognition (or euphonic sound) over performance proven by measurements. Of course, a few brand-recognition leaders (Ayre, Simaudio, Boulder, Meridian) were able to charge stratospheric prices because of the general industry malaise/lethargy to improve.

Finally, a decade later, in 2018 we find many more reasonably priced DACs (entry prices around $2000 for a really excellent DAC) that perform well; however, there are still a huge number of lemons out there.

Perhaps if the users held the industry to a higher standard then component quality would improve more quickly. As it stands, a new face plate, industrial artistic design and the marketing budget remain more heavily funded than engineering performance.

Anders, I agree completely. My musing, for what it’s worth, is that it’s not just the jitter on data incoming to the system which currently has no standard (or contract, as you put it) to meet, but that the typical way an audiophile system is put together also very often involves a mish-mash (or often mismatch) of specifications which might, or might not, get along more or less well. In my experience, it is only by either very extensive (and often costly) trial and error, or sheer blind luck, that all these match up and produce something accurate (or pleasing, or both). Certain quarters of the hi-fi industry and magazines thrive on exactly this - no one is ever happy with what they have, so they pursue a merry-go-round of trying to find something which matches slightly better with the other mismatched components they have.

E.g., Meridian’s analogy for traditional systems, outlined in their white paper, is that of buying your car’s engine from one manufacturer, suspension from another, and transmission from a third, and then expecting it to function as a perfectly tuned machine. My experience through decades of pursuit in this hobby, with a lot of equipment, modifications, cables, etc. along the way, is that there is a lot to be said for this analogy.

Perhaps the biggest issue with networked and computer audio is that the specifications were never designed with audiophile music transfer in mind. Having said that, there are ways of mitigating the worst aspects of what can screw up musical enjoyment when streaming audio, and well-designed equipment is key. If a piece of equipment which is designed for streaming doesn’t take measures to deal with the real-life problems associated with that type of signal and environment, it’s a badly designed piece of equipment. End of.
