mep - re-reading this I think I owe you an apology. Not sure why I interpreted your question as being confrontational. Mea culpa.
You're setting a good example.
The fact is: all digital communications in the modern audio world are 100% bit perfect, regardless of cable brand or quality. Ethernet (or USB) delivers the bits fully intact from the source to the DAC; the waveform has enough integrity margin to withstand severe degradation before any corruption of the bits occurs. Other industries use these same interfaces to move terabytes of data around without anyone complaining of errors.
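If anyone wants to check the bit-perfect claim for themselves, here's a quick Python sketch: hash the file on the server, hash the copy that arrived over the Ethernet/USB link, and compare. The file paths below are just placeholders; use any file you have handy.

import hashlib

def sha256_of(path: str) -> str:
    # Stream the file in 1 MiB chunks so even large files hash without loading into RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder paths: the original on the server and the copy after the network hop.
src = sha256_of("original/track.flac")
dst = sha256_of("streamer_copy/track.flac")
print("bit perfect" if src == dst else "corrupted in transit")

If the two hashes match, every single bit arrived intact, whatever cable carried it.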
In the audio space, the crucial difference in our systems is that the DAC ultimately has to convert the bits into an analog waveform referenced to a voltage level, which we then hear through our speakers. It's at this point of conversion from bits to volts in the DAC that everything audible is manifested. At any instant of a song, the DAC hardware dutifully tries to set the output analog level according to the input bit pattern, but it can only do this with total precision if the DAC's reference voltage and its power and ground rails are perfectly unchanging. If they are moving (even at the microvolt level), well then damn it, we hear it, and our brain dissolves the belief that the recording is reality.
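To put some rough numbers on "even at the microvolt level", here's a back-of-envelope Python sketch. The reference voltage (2.5 V) and ripple figure (100 uV) are assumptions for illustration, not measurements of any particular DAC, but they show how reference ripple compares to the size of one output step (LSB).

V_REF = 2.5      # assumed DAC reference voltage, volts
RIPPLE = 100e-6  # assumed ripple on the reference, volts (100 microvolts)

for bits in (16, 24):
    lsb = V_REF / (2 ** bits)    # size of one DAC output step
    error_in_lsb = RIPPLE / lsb  # worst-case (full-scale) error caused by the ripple
    print(f"{bits}-bit DAC: 1 LSB = {lsb * 1e6:.3f} uV, "
          f"100 uV of reference ripple is about {error_in_lsb:.0f} LSB of error")

With those assumed numbers, a 16-bit converter loses a couple of its least significant bits to the ripple and a 24-bit converter loses far more, which is why the reference and the rails matter so much.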
And this is the 'hard to grasp' but quite obvious fact: nasty analog noise from the source player or the AC mains is conducted along anything metal and rides the digital cable (without affecting its data integrity) to the DAC, where it does the damage. Sure, there are preventative measures in place such as RF shields, careful component and trace layout, improved power supplies and thick metal barriers, but audiophiles with resolving systems and good ears still hear the errors in the waveforms. Bit errors sound like pops, clicks and crackles; RF-induced errors sound like what you would expect: loss of staging, depth, clarity, etc.
So, yes, better digital cables (Ethernet, USB) do make a sonic difference, but only because their metallurgy, construction or design lets less RF noise through. It's my opinion that cable manufacturers perpetuate the ignorance of what is really going on. In my own case, I've transformed my mid-priced, well-constructed USB cable into a rock star by adding a few dozen clamp-on ferrites, and saved many hundreds of dollars in the process.
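For the ferrite trick, the standard EMC rule of thumb is that a series suppression element attenuates common-mode noise by roughly 20*log10(|Zs + Zl + Zferrite| / |Zs + Zl|). The impedance figures in this Python sketch are assumptions for illustration (a typical clamp-on ferrite is on the order of a couple hundred ohms around 100 MHz), not data for any specific part, but they show why stacking a few dozen of them adds up.

import math

Z_SOURCE = 50.0        # assumed common-mode source impedance, ohms
Z_LOAD = 50.0          # assumed common-mode load impedance, ohms
Z_PER_FERRITE = 200.0  # assumed impedance of one clamp-on ferrite near 100 MHz, ohms

for count in (1, 4, 12, 24):
    z_ferrites = count * Z_PER_FERRITE  # ferrites in series roughly add impedance
    il_db = 20 * math.log10((Z_SOURCE + Z_LOAD + z_ferrites) / (Z_SOURCE + Z_LOAD))
    print(f"{count:2d} ferrite(s): roughly {il_db:.1f} dB of common-mode attenuation")

Each ferrite on its own only buys a few dB, which is why one or two never seem to do much; it's the pile of them that finally pushes the noise down.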
I know this has been done to death, but can anyone point me to a convincing article/blog/post which explains the science/theory/thinking/explanation behind how it is possible that digital Ethernet cables can produce different sounds?