Why is 5 m the maximum recommended length for a USB 2.0 cable? Is digital data not data via USB?
I'm not sure where you're going with this. Every type of digital cable has a maximum length, depending on the type of signaling and the max frequency. It's still all or nothing, so if you exceed those lengths, you start getting more and more "nothing" and less and less "all". Also, USB can work with or without error correction. With audio, it's usually without.
This "is" where I am going, and what I am trying to find answers to.
With TCP it's truly all or nothing.
I'm assuming that if you go beyond the maximum recommended cable length with other methods of digital transfer, there is complete signal loss, or no transfer at all.
More "nothing" and less and less "all" suggests there are grey areas between all and nothing (signal degradation or noise) as cable length is extended, but before it reaches its maximum length. Or perhaps shorter (micro) bursts of all or nothing.
If the memory buffer (via TCP) runs out of packets, there is a definite period of "nothing". With USB, AES, S/PDIF, etc. there are never periods of "nothing".
If USB (audio) works without error correction, those errors must then be arriving at the memory buffer. If there are errors in the sent signal, how can the memory buffer perform its job perfectly (as it does with TCP)? What does the memory buffer have as a reference to check against with a USB transfer?
As I am achieving a constant audio signal on my system via USB transfer, does the memory buffer work on different levels (almost like the levels of error correction when ripping)? Are all memory buffers created "equal"? Are they installed "equal"? Is there noise pollution in overworked memory buffers? I struggle to believe that there are no performance limitations to memory buffers in an electrical environment, even in their ability to reject signal. Almost everything on earth has performance limitations and degradation.
We have deviated a long way from where I was thinking this thread subject would go, but it is interesting nonetheless. Everyone seems to agree that TCP is the best method of data transfer (assuming the network is up to it), which implies that other methods of transfer are imperfect. How this affects the audio signal, at any point in the signal chain, is what I am interested in.
The unknown unknowns.
Thank you all.
No. It may just fail to achieve its nominal bandwidth, i.e. how much data it can transmit per second at a nominal BER (bit error rate), if you exceed the specs. The digital signalling protocols are designed for particular operating parameters, one of which is the cable length.
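To put some rough numbers on that, here is a toy model in Python. The packet size and BER values are purely illustrative assumptions, not USB 2.0 spec figures, and real throughput also depends on protocol overhead:

```python
# Toy model: goodput of a link that retransmits bad packets until they pass.
# With a healthy BER almost every packet is clean; push the BER up (e.g. by
# running a cable far outside its specs) and throughput collapses long
# before the link stops working entirely.
nominal_mbps = 480            # USB 2.0 high-speed signalling rate
packet_bits = 512 * 8         # assumed packet size, for illustration only

def goodput(ber):
    p_packet_ok = (1 - ber) ** packet_bits   # chance a packet arrives intact
    return nominal_mbps * p_packet_ok        # only intact packets count

print(goodput(1e-12))   # ~480 Mbit/s  (in-spec link)
print(goodput(1e-3))    # ~8 Mbit/s    (badly degraded link, yet still not "nothing")
```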
You're misunderstanding what the memory buffer does. It just stores the bits until the software is ready to put them through the DAC. The signalling protocols do any error checking that's available.
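To illustrate the point, here is a minimal sketch of what a playback buffer amounts to (hypothetical Python, not any real driver): it queues whatever it is given and hands it on, with nothing to check the data against.

```python
from collections import deque

class PlaybackBuffer:
    """Minimal sketch of a FIFO audio buffer (hypothetical, for illustration).

    It queues whatever sample frames the transport hands it. It has no copy
    of the "real" data to compare against, so it cannot tell a correct frame
    from a corrupted one; any error checking happens in the signalling
    protocol before the data ever reaches the buffer.
    """

    def __init__(self, max_frames):
        self._frames = deque(maxlen=max_frames)

    def write(self, frame):
        self._frames.append(frame)   # stored as-is, right or wrong

    def read(self):
        # Next frame for the DAC, or None on underrun (a period of "nothing").
        return self._frames.popleft() if self._frames else None
```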
There are of course performance limitations to everything. Part of the engineering design task is to select memory technology so that the performance limitations are never exceeded. Though memory buffers are pretty simple and well-understood. Probably hard to go wrong there.
I'm not sure who "everyone" is, nor have I taken a poll there.
TCP over Ethernet is fast and error-free. TCP over WiFi can be less fast, but still error-free. UAC2 over USB with short cables is typically error-free, though a BER of up to 1 bit in 10^20 is allowed. Not sure what the specs are for S/PDIF (which is just a consumer version of AES); they were the first digital audio protocols designed, so more recently designed protocols (UAC) may be superior, as they've been able to profit from experience with S/PDIF. On the other hand, there has been lots of design experience with S/PDIF.
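For a sense of scale, a BER of 1 in 10^20 works out as follows (rough Python arithmetic; the audio bit rate is my own assumption, the BER figure is the one quoted above):

```python
# Rough arithmetic: time between single-bit errors at a BER of 1e-20.
bit_rate = 2 * 24 * 192_000              # stereo, 24-bit, 192 kHz ~ 9.2 Mbit/s (assumed)
ber = 1e-20

seconds_per_error = 1 / (bit_rate * ber)
years_per_error = seconds_per_error / (3600 * 24 * 365)
print(f"about {years_per_error:.0e} years between bit errors")   # ~3e+05 years
```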
No. Digital transfer happens in packets, and when the signal-to-noise level reaches a certain threshold, you start losing packets. It's still all or nothing, but at the packet level. In certain modes (e.g. when you copy files), USB works just like TCP: it detects errors and retransmits bad packets. If that still doesn't work after a few retries, it fails the whole transfer. In other modes, it just drops failed packets, or simply does not detect errors. Also, the recommended max length is for correct operation, not the point where it breaks down completely. If you use, say, a 2 m cable in your home, the chances of undetected errors are astronomically small.
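A schematic sketch of those two modes (illustrative Python, not the actual USB stack; `transmit` here is just a stand-in for the physical transfer, returning the possibly corrupted payload plus the checksum computed by the sender):

```python
import zlib

def bulk_transfer(packet, transmit, max_retries=3):
    """'File copy' style: detect bad packets, retransmit, fail hard if stuck."""
    for _ in range(max_retries):
        payload, sent_crc = transmit(packet)      # data + sender's CRC
        if zlib.crc32(payload) == sent_crc:
            return payload                        # "all": packet arrived intact
    raise IOError("transfer failed")              # "nothing": whole transfer aborted

def audio_frame_transfer(packet, transmit):
    """Streaming style: no retries; a bad frame is dropped and playback moves on."""
    payload, sent_crc = transmit(packet)
    return payload if zlib.crc32(payload) == sent_crc else None
```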
All this has nothing to do with memory buffers.
Thank you very much Marian (and Bill) for your time and help. This is what I was looking to have confirmed.
Sorry, I misread the answer earlier about filtering protocols and memory buffering.
Can reaching this signal-to-noise threshold be caused by (analog) noise?
Well, yes, because we're talking about signal-to-noise ratio, and noise is always analog.
Didn't early streaming tech rely on UDP, not TCP? I know Roon did for a while with RAAT in earlier builds, and dropouts happened for some; they moved to TCP a while after launch, if I remember.
Would it really matter for purposes of this discussion, though? With UDP you may lose some packets permanently, instead of getting them retransmitted, but that still gets you dropouts rather than any sound quality changes… UDP packets are checksummed just as well, so you still either do get them correct, or you don't…
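For scale, here is roughly what one lost UDP packet costs in a plain PCM stream (the packet size and stream format are assumptions of mine, not RAAT specifics):

```python
# One lost packet = a short gap, i.e. a dropout or click, not a tonal change.
sample_rate = 44_100                   # Hz
channels, bytes_per_sample = 2, 2      # 16-bit stereo PCM (assumed)
payload_bytes = 1_400                  # roughly what fits in one Ethernet frame

frames_per_packet = payload_bytes // (channels * bytes_per_sample)
gap_ms = 1000 * frames_per_packet / sample_rate
print(f"one lost packet ~ {gap_ms:.1f} ms of missing audio")   # ~7.9 ms
```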
As @Marian says, yes.
But I think the issue of electrical noise is vastly over-hyped in audio fora. It's not the 1950s any more. Consumer electronic devices are usually noise-free these days, for a number of reasons. Current and voltage levels used in integrated circuits are minuscule compared to those used in the discrete-component devices of the 1950s and 1960s. Moving parts aren't there any more. Internal voltage regulators have been made very stable and almost ripple-free. And noise is wasteful; reducing power draw is a prime consideration, and wasting that power generating noise is a big engineering no-no. On top of that, the proliferation of useful electronic devices means that noise has been curtailed by regulatory fiat, and the FCC tests things to see that they comply.
So as long as one keeps one's data cables away from one's speaker cables and one's garage door opener (not the remote, the actual motor), it's unlikely noise will intrude.
Thank you for confirming what I have long suspected.
To summarize, for those who haven't put all the bits together yet, or for the keyboard-warrior forum experts whose perception of absolute fact has now slightly changed: in (non-TCP) digital audio transfer, analog noise can affect the signal transfer, because packets get dropped or are not error-corrected.
For years, on various forums, and even earlier in this thread, I have been ridiculed or shot down for even suggesting or questioning the issue of digital audio transfers. "Data is data", "bits are bits", "what makes you think audio is different from other data files", etc., etc. Anecdotal comments on file transfers, banking, stock exchanges, and global internet security, which have nothing to do with the subject I have raised, have been fired back at me. And I'm the (audio) fool?
Is it possible that all modern equipment designs are capable of rejecting all the (common and differential) noise? Some of the best audio recordings are from a time before microwave ovens, never mind all the modern gadgets we now have in our lives. Are the design improvements in modern equipment enough to counter the increased noise levels we now encounter?
Well, what my previous post was trying to say is that ambient EMI noise has decreased, not increased. Not sure where you're getting this idea that it's increased.
All? No, for both of your "alls". No absolutes anywhere. There's badly designed and badly built equipment nowadays, like always. Luckily for us, there's also less noise.
So after all that's been said here, you concluded that bits are not bits and data is not data? That is indeed ridiculous. Getting corrupt bits at the end of a USB cable is a pathological case, not the norm, and certainly not something you'd expect in a home environment.
Usually, when everyone tells one that one is wrong, it is not because one is the only person in the world who knows the truth.
And maybe it is because bits are actually bits, data is data, and except for some edge cases analog noise has very little effect on digital audio reproduction.
Besides, the UAC2 USB audio protocol does have checksums as well, so the DAC can know, if it cares to, whether the data has been received accurately or not. Unless you are using an audiophile, directional, $500/foot USB cable, you are likely to experience one flipped bit in several days of non-stop listening. For all practical purposes it is irrelevant.
Of course, even if data errors were much more prevalent, short of having some kind of Maxwell's demon flipping bits "just right", the result would be clicks, clacks, and dropouts, not any kind of less inky blacks, reduced soundstage, tiring highs, or whatever other "wife heard it from the kitchen" effects are supposed to be alleviated by 10,000%-profit-margin cables and other doodads.
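To make that concrete, here is what a single flipped bit does to one 16-bit PCM sample (the numbers are illustrative only):

```python
# A flipped high-order bit is a one-sample jump of roughly half of full scale:
# an audible click. A flipped low-order bit sits ~96 dB below full scale.
# Neither accumulates into "soundstage" or "blacker blacks" over time.
sample = 1000                        # an arbitrary quiet-ish 16-bit sample value
high_bit_flip = sample ^ (1 << 14)   # -> 17384, a large instantaneous jump
low_bit_flip = sample ^ 1            # -> 1001, utterly inaudible

print(sample, high_bit_flip, low_bit_flip)
```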
It is, of course, possible to design, either out of malice or sheer incompetence, a device with no noise rejection whatsoever, and many a boutique high-end company succeeds at it quite well, but this is a special case that has no bearing on anything that has been designed by actual engineers within reasonable budget constraints.
Sorry Marian, perhaps my grammar should have been better, but you have misinterpreted what I meant by that line. It's the line I've always had thrown back at me when I suggested that, as you have confirmed, the signal as a whole could be affected by noise. I wasn't referring to the packets.
I understand that data is data and bits are bits, but as you said yourself, when an SNR threshold is reached, some of those packets can be dropped or not error-corrected. Some will no doubt even dispute your explanation.
I'm grateful for the time and (plain-speaking) answers you have given me around the noise issue, where most others are merely dismissive. A quick Google search will find articles and white papers on the effects of SNR on networks, but as I am not computer-minded, the terminology means little to me.
I have not claimed to have golden ears, or a wife that can hear a difference from the kitchen, or posted any personal subjective claims about audio sound quality. I merely wanted to know whether analog noise can have an effect on a (non-TCP) digital audio signal, and why this might be.
If packets are dropped, it does not matter. They get retransmitted from the last interface they left prior to the issue causing the drop. If they are not error-corrected, they fail the checksum and get dropped, and retransmission occurs again. It's robust and works.
Noise does not get written to the buffer. The transmission of packets relies simply upon voltage states changing within the transmission medium. The exact value of the voltage is not important, provided it is within an approximately 20% tolerance (five voltage states, or brackets, on Gigabit Ethernet, as an example). Once the voltage is sensed by the receiver, it is given a value according to the bracket it arrived in. Once the packet is reassembled and the checksum evaluated, if one of those voltage states was shifted into another bracket by interference, then the packet fails the checksum. Retransmission occurs.
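A toy version of that "bracket" decision (the nominal levels here are illustrative, not exact 1000BASE-T voltages):

```python
# Each received voltage is slotted into the nearest nominal level (its bracket).
# Small noise does nothing; large noise pushes a symbol into the wrong bracket,
# the packet then fails its checksum, and it gets retransmitted.
NOMINAL_LEVELS = [-2, -1, 0, 1, 2]   # PAM-5-like, illustrative values

def decode_symbol(received_voltage):
    return min(NOMINAL_LEVELS, key=lambda level: abs(level - received_voltage))

print(decode_symbol(0.9))   # 1 -> noise absorbed, value unchanged
print(decode_symbol(1.6))   # 2 -> wrong bracket; caught later by the checksum
```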
Um, OK. It's only logical that, given any electrical transport, there will be some amount of noise that will disrupt it irreversibly. No engineer will deny that. What you need to take away is that digital is way, way more tolerant of noise than analog, to the extent that, for all intents and purposes, a digital stream is bit-perfect. I want to emphasize this, because my impression is that you're just trying to reinforce some of your preconceptions by asking obvious questions.
For example, if I asked you if planes can crash, and you gave me the obvious answer (yes, they can), I could conclude that flying is really dangerous and choose to drive instead. The reality is that flying is by far the safest mode of transportation, way less dangerous than driving.
Agreed, I have been a bit like a dog with a bone, but it's not really a preconception, as I have to admit it is something I read elsewhere, not really my own idea. Just had to get to the marrow.