GigaFoil v4 Inline Ethernet Filter

puma cat: thanks for this post and all the others on this topic. lots of good information here to help us understand some of the nuances of digital signal transmission for audio applications.

after reading the john swenson material in the post several times, i have one question on the below quoted section which, perhaps, others here can address:

...The important thing to understand is that ALL digital signals carry the "fingerprint" of the clock used to produce them. When a signal coming from a box with cheap clocks comes into a box (via Ethernet or USB etc) with a much better clock, the higher level of phase noise carried on the data signal can contaminate the phase noise of the "good" clock in the second box...

...As an example if you start with an Ethernet signal coming out of a cheap switch, the clock fingerprint is going to be pretty bad. If this goes into a circuit with a VERY good clock, the signal coming out contains a reduced fingerprint from the first clock layered on top of the good clock. If you feed THIS signal into another circuit with a very good clock, the fingerprint from the original clock gets reduced even further. But if you feed this signal into a box with a bad clock, you are back to a signal with a bad fingerprint...

the question generated by the above statement is with respect to only ethernet... my understanding is that for audio applications ethernet is a method of file / data transfer and is not used to transmit a rendered audio signal. at least this should be the case for DACs with a network renderer (please, correct me if i am wrong here).

my question concerns the case where an audio file is transmitted over ethernet to a "box" which then writes the file to storage (ssd) or to memory (ram) for either later access or caching... does that stored (at rest) file then contain the fingerprint of the upstream clock? if so, it would follow that the audio file has then been changed forever and that would then violate all the ethernet protocols governing error detection and correction.

similarly, what about the case of a file transmitted (streamed) over ethernet to a box which then buffers the data for immediate use / rendering? even in this case, the presence of any fingerprint from the upstream clock would also seem to "permanently" change the file data, thereby, also violating ethernet error protocols?

thanks in advance for any comments here to help further my understanding of this.
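p.s. to make the "bits at rest" half of my question concrete, here is a minimal python sketch (hypothetical file paths, just the check i have in mind): hash the source file on the server and the copy the streamer wrote to its ssd after the ethernet transfer. if error detection and retransmission work as specified, the digests should match bit for bit:

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # hash the file in 1 MiB chunks so large albums don't need to fit in RAM
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# hypothetical paths: the original on the server share and the copy the
# streamer wrote to its SSD after the ethernet transfer
src = sha256_of("/nas/music/track.flac")
dst = sha256_of("/streamer/cache/track.flac")
print("bit-identical:", src == dst)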
 
Thank you for the reply and the efforts undertaken to optimise the noise reduction of Ethernet. I also had the Heimdall in mind, but now hesitate based on your experience with them. For the FMC route, each FMC needs a PSU, and the costs start to climb, so I am leaning away from this complexity. I try to keep it simple with a JCAT NET FEMTO at the music server and 25m of Cat 5 direct to a Lumin U1. The sound is very good with great imaging; the GigaFoil appeals since it is one stop, not spread out over many devices, to remove that last bit of noise.

Heimdalls are nice cables, but like any cable, each serves as a conduit for the sound, price be damned.


 

Isn’t it really all about file transmission rates and their accuracy, and how noise affects the path from A to B, leading to a level of file corruption? The better the file is transmitted and received, along the path of least resistance to preserve its original source, including the materials used within the cable, the better the sound quality.

I’m not sure files can be rebuilt to their original source quality once passed down the line. It is something like a damaged car: sure, it can be repaired to look like new, but never to original condition. Or, bluntly stated: garbage in, garbage out.




 
I am not sure I have ever encountered so many variables trying to improve the SQ of my set-up as I have with the different pieces that make up my digital network. Over many months I have dealt with improvements in SQ offset by reliability (dropout) issues. Some products did not seem to want to work with each other. The sequence in which the network saw different types and lengths of cables/wires made a difference. I knew pulling new wire was a challenge. Different products improved the sound in very different ways. Will combining those products produce a sound that is better or worse than they do individually? Trying to reintroduce a GigaFOIL into my new set-up might be my next challenge.
 
Isn’t it really all about file transmission rates and their accuracy, and how noise affects the path from A to B, leading to a level of file corruption? The better the file is transmitted and received, along the path of least resistance to preserve its original source, including the materials used within the cable, the better the sound quality.

I’m not sure files can be rebuilt to their original source quality once passed down the line. It is something like a damaged car: sure, it can be repaired to look like new, but never to original condition. Or, bluntly stated: garbage in, garbage out.

ultraFast69: my question is specific to ethernet data transmission, whose specifications / standards include error checking and correction protocols such that, at the end of the process, the sending and receiving devices hold exactly the same data... in essence, ethernet protocols mandate that the transmitted files are "rebuilt to their original source quality".

"Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. Each frame contains source and destination addresses, and error-checking data so that damaged frames can be detected and discarded; most often, higher-layer protocols trigger retransmission of lost frames... The Internet Protocol is commonly carried over Ethernet and so it is considered one of the key technologies that make up the Internet." -wikipedia

from the swenson quote in post #375, i am trying to understand how data transmitted over ethernet can contain the "fingerprint" of upstream clocks -- which are used to time the transmission of the data but have absolutely nothing to do with the content of the data itself. this would imply the data transmitted has been modified to include this "fingerprint", which would be in direct conflict with ethernet protocols -- and, presumably, impossible.
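as a toy illustration of the frame check described in that wikipedia quote (a sketch only, not a real MAC/PHY), python's zlib module exposes the same crc-32 used for ethernet's frame check sequence, which is enough to show how a receiver detects and discards a damaged frame so the higher layers can trigger a retransmission:

import zlib

def make_frame(payload: bytes) -> bytes:
    # append a CRC-32 over the payload, playing the role of ethernet's FCS
    fcs = zlib.crc32(payload).to_bytes(4, "little")
    return payload + fcs

def frame_ok(frame: bytes) -> bool:
    # receiver recomputes the CRC; any mismatch means the frame is discarded
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "little") == fcs

frame = make_frame(b"a chunk of an audio file")
print(frame_ok(frame))                # True: accepted

damaged = bytearray(frame)
damaged[5] ^= 0x01                    # flip a single bit "in transit"
print(frame_ok(bytes(damaged)))       # False: dropped, higher layers re-send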
 
Thank you for the reply and the efforts undertaken to optimise the noise reduction of Ethernet. I also had the Heimdall in mind, but now hesitate based on your experience with them. For the FMC route, each FMC needs a PSU, and the costs start to climb, so I am leaning away from this complexity. I try to keep it simple with a JCAT NET FEMTO at the music server and 25m of Cat 5 direct to a Lumin U1. The sound is very good with great imaging; the GigaFoil appeals since it is one stop, not spread out over many devices, to remove that last bit of noise.

Respectfully, no, not really. You can very effectively power each FMC with a $10.95 Jameco 5V (for the OpticalModule) or 9V (for consumer-grade FMCs) regulated linear power supply. These work quite well, actually, and sound quite good. My first fiber config cost me all of $66. A $17, 7m run of OM-1 spec fiber is considerably less expensive (by roughly two orders of magnitude) than a long run of audiophile ethernet cable, and it is impervious to EMI/RFI and to low- and high-impedance leakage currents.
 
the question generated by the above statement is with respect to only ethernet... my understanding is that for audio applications ethernet is a method of file / data transfer and is not used to transmit a rendered audio signal. at least this should be the case for DACs with a network renderer (please, correct me if i am wrong here).

I'm not an expert by any means, only someone who has been doing a lot of reading on this lately. The bottom line is, it doesn't really matter whether the "connector" is "copper" ethernet, a USB cable or a run of optical fiber... all of these methodologies transfer analog square waves, and analog signals are susceptible to a wide class of noise factors, some of which have been discussed above.


my question concerns the case where an audio file is transmitted over ethernet to a "box" which then writes the file to storage (ssd) or to memory (ram) for either later access or caching... does that stored (at rest) file then contain the fingerprint of the upstream clock? if so, it would follow that the audio file has then been changed forever and that would then violate all the ethernet protocols governing error detection and correction.

For file "storage", I don't think this scenario would fall prey to the comments above: timing, wave shape and phase noise do not play a role because nothing is being streamed in real time. The data, once sent, gets converted from the analog square wave by the PHY chip and the MAC and is stored in NAND or hard-drive memory as 0s and 1s. In this case, timing and, if I were to guess, clock phase noise are not a problem.

As for storage in RAM or a cache, I would think the above would apply, but I am not a subject matter expert; better to ask John Swenson on that computer audio forum.

As best as I can figure, this is what all the IT professionals get completely wrong with respect to digital music streaming and playback: as Hans Beekhuijzen has stated, timing is HYPERCRITICAL.

It doesn't matter when a printer printing a document or photo, or a computer display, sees a dropped packet and the receiver requests a resend; the pixel either gets printed or displayed, and we don't observe these little corrections because the output is DISCRETE: either the pixel got printed or displayed, or not. But music playback is not discrete, it's continuous, and our brains are acutely aware of extremely small timing differences, differences which we can hear. Our hearing is also very sensitive to distortions. If EMI/RF/leakage currents or clock phase noise impact a digital bitstream and cause "parts" of the playback of fundamentals and harmonics to spill over into inharmonics or specific overtones, we may not like the sound of that at all. A Guarnerius violin playing Vivaldi then does not sound smooth, sweet and extended; it sounds steely, harsh and strident. This is, to a large degree, I think, why, even though LPs don't have the dynamic range of digital recordings, many listeners find LPs a more "engaging" representation of what we perceive real music to sound like.
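To put a rough number on just how small these timing differences can be and still matter, here is a toy numpy sketch (my own illustration with an invented jitter figure, not a measurement of any real device): it reconstructs a pure 1 kHz tone from samples whose playback instants wander by a few hundred picoseconds and reports how far the resulting error sits below the tone.

import numpy as np

fs = 192_000                      # sample rate, Hz
f0 = 1_000.0                      # test tone, Hz
t_ideal = np.arange(fs) / fs      # one second of ideal sample instants

jitter_rms = 250e-12              # 250 ps RMS of clock jitter (made-up figure)
t_real = t_ideal + np.random.normal(0.0, jitter_rms, size=t_ideal.shape)

ideal = np.sin(2 * np.pi * f0 * t_ideal)
played = np.sin(2 * np.pi * f0 * t_real)   # same tone, sampled at the wrong instants

err = played - ideal
snr_db = 10 * np.log10(np.mean(ideal**2) / np.mean(err**2))
print(f"error floor from {jitter_rms*1e12:.0f} ps RMS jitter: {snr_db:.1f} dB below the tone")

With this invented figure the error comes out roughly 115 dB below a 1 kHz tone, and it grows with signal frequency; the spectral shape of the jitter (the phase noise Swenson talks about) matters as much as any single RMS number.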

similarly, what about the case of a file transmitted (streamed) over ethernet to a box which then buffers the data for immediate use / rendering? even in this case, the presence of any fingerprint from the upstream clock would also seem to "permanently" change the file data, thereby, also violating ethernet error protocols?


It's not about ethernet error protocols; that is just a set of rules. Any analog square wave that is the actual, physical manifestation of a digital bitstream is susceptible to EMI, RF, AC leakage currents, power/ground interactions, a shift in the ground plane (which should always be "ZERO") resulting from an interaction of the ground plane with a fast transient, clock phase noise, and so on. Any receiver that ramps up current and voltage to request that a packet be re-sent also adds more noise to that circuit. Even the number, arrangement and wiring of transformers in Ethernet connections (e.g. ports and switches) matters.
 
Isn’t it really all about file transmission rates and their accuracy, and how noise affects the path from A to B, leading to a level of file corruption? The better the file is transmitted and received, along the path of least resistance to preserve its original source, including the materials used within the cable, the better the sound quality.

Well, yes and no. I can't speak to the transmission rates. However, their accuracy is impacted by various noise/distortion components, and their timing is impacted by clock phase noise.

With respect to cables: as all Ethernet or USB cables are either copper or silver, I think the differences are due more to cable design and construction approaches than to the materials per se (though using PVC as an insulator/dielectric is never a good idea). Shielded cables that are connected to metal RJ45 plugs are a bad idea; they pass leakage currents on to their respective receivers.

I’m not sure files can be rebuilt to their original source quality once passed down the line. It is something like a damaged car: sure, it can be repaired to look like new, but never to original condition. Or, bluntly stated: garbage in, garbage out.

I think we have to separate "storage" of digital music files from the digital "bitstream". The files are stored in NAND or hard-drive memory as 0s and 1s. The problem when sending the bitstream is not with the intrinsic data error-correction functions; it's all the "stuff" that gets "layered on top of" the analog square-wave voltage that comprises the digital bitstream during streaming, as near as I can figure.
 
I am not sure I have ever encountered so many variables trying to improve the SQ of my set-up as I have with the different pieces that make up my digital network. Over many months I have dealt with improvements in SQ offset by reliability (dropout) issues. Some products did not seem to want to work with each other. The sequence in which the network saw different types and lengths of cables/wires made a difference. I knew pulling new wire was a challenge. Different products improved the sound in very different ways. Will combining those products produce a sound that is better or worse than they do individually? Trying to reintroduce a GigaFOIL into my new set-up might be my next challenge.

Jim, does the GigaFOIL have a clock subsystem? My guess is it has to, at some level.
 
More info from John Swenson...quoted here for accuracy and attribution

"All the optical does is block leakage, it doesn't get rid of clocking issues at all (it can actually make them worse). The fact that it is optical does not automatically apply some universal quantum time scheme that mystically aligns edges perfectly, If you send in a pulse, then another that is 50ns apart, then another at 51ns, then another at 49, that difference gets preserved at the receiver, the optical does not magically force all of them to be exactly 50ns.

The raw data coming out of the optical receiver goes into a chip that rebuilds the Ethernet signal using its own local clock, that is done with flip flops inside the chip, these flip flops behave just like any other flip flops, again no magic here. I was trying to avoid re-iterating what I have said before on this, but it looks like I'm going to have to do it anyway.

So how come this reclocking with a new clock is not perfect? As edges from the input stream go into a circuit each and every one of those edges creates a current pulse on the power and ground network inside the chip and on the board. The timing of that pulse is exactly related to the timing of the input data. The timing of the input data is directly related to the jitter on the clock producing the stream. This noise on the PG network changes the threshold voltage of anything receiving data inside the chip, especially the local clock going into the chip. This means the phase noise spectrum of the data coming in gets overlayed on top of the phase noise spectrum of the local clock. It's attenuated from what it is in the source box, but it is definitely still there.

THAT is how phase noise gets from one device to the next, EVEN over optical connections.

If you look at this in a system containing all uniformly bad clocks, you don't particularly see this, since they are all bad to begin with. BUT when you go from a bad to a very good clock you can definitely see this contamination of the really good clock by the overlaying of the bad clock. This is really hard to directly measure because most of the effect is happening inside the flip flop chip itself. You CAN see the effect on the data coming out of the flip flop.

This process happens all the way down the chain, Ethernet to USB, USB into DAC box, and inside the DAC chips themselves, finally winding up on the analog out.

Wherever reclocking is happening, how strong this overlay is depends primarily on the impedance of the power and ground network, both on boards and inside chips. A lower impedance PG network produces lower clock overlay, higher PG impedance give stronger overlay.

This is something that is difficult to find out about a particular chip, the impedance of the PG network is NEVER listed in the data sheets! I have somewhat of an advantage here having spent 33 years in the semiconductor industry, spending a lot of time designing PG networks in chips, I have some insight into which chips look like good candidates for low impedance PG networks.

On a side note, because Ethernet and USB are packet systems the receiving circuit CAN use a completely separate clock, the frequency just has to be close enough to handle the small number of bits in the packet. If it is a little to slow or too fast the difference is made up in the dead time between packets.

To reiterate none of this has ANYTHING to do with accurately reading bits, this is assumed. It IS all about high jitter on network clocks working its way down through reclockings to the DAC chips and hence to audio outs. All the work done on DACs in recent years has cleaned up the signals so dramatically that these effects are getting to be audible in many systems."
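To visualize the "attenuated but still there" behaviour John describes, here is a deliberately crude toy model (my own numbers and coupling factor, nothing from John): each reclocking stage is treated as contributing its own intrinsic jitter plus a fraction of whatever jitter rode in on the data.

# Toy model of the "fingerprint overlay": every stage passes on an attenuated
# copy of the incoming clock's jitter on top of its own. The 0.1 coupling
# factor and the picosecond figures are invented purely for illustration.
def reclock(incoming_jitter_ps, local_clock_jitter_ps, coupling=0.1):
    return local_clock_jitter_ps + coupling * incoming_jitter_ps

cheap_switch = 100.0                                   # a deliberately bad source clock
stage1 = reclock(cheap_switch, local_clock_jitter_ps=1.0)
stage2 = reclock(stage1, local_clock_jitter_ps=1.0)
stage3 = reclock(stage2, local_clock_jitter_ps=80.0)   # a bad clock late in the chain

print(stage1)   # 11.0  -> much better than 100, but the upstream fingerprint is still visible
print(stage2)   # 2.1   -> a second good clock attenuates it further
print(stage3)   # 80.21 -> a bad clock downstream dominates again

The only point of the toy model is that the upstream contribution never goes to exactly zero at any single stage, which is the same qualitative picture as the quote above.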
 
More info from John Swenson...quoted here for accuracy and attribution...

thanks for this information!! very helpful in putting all the pieces together.

to clarify: i am referring to ethernet transmission of an audio file itself (e.g. flac, wav, mp3), NOT the transmission of an audio file that has been decoded and rendered into a timed, digital bitstream... the latter, i now believe, is what swenson is discussing, and this distinction was the source of the confusion generating my initial question.

as a concrete example, consider an MSB DAC with an MSB network renderer module... according to MSB:

The renderer receives the audio file from the media server and creates a digital music stream, where it’s sent and converted into analog audio by a DAC...

The renderer needs a low jitter clock and clean power, provided by the MSB DAC’s master Femtosecond clock and its multi-rail isolated linear power supply.

see this link for a great overall description of the network topology: http://www.msbtechnology.com/renderernetwork/

in this topology, an encoded audio file is sent via ethernet to the renderer module, which then decodes the file and converts it into a timed, digital bitstream using the DAC's onboard master femtosecond clock. here, there is no re-clocking, as the MSB DAC is creating the timed bitstream itself.

i agree that the re-clocking of a timed, digital bitstream can be influenced by upstream clocks, with the amount of such influence, if any, being DAC specific. however, this still leaves open the question of whether ethernet is used to transmit this type of data... anyone know of a DAC that accepts a timed, digital bitstream over ethernet? my understanding is that rendered files are usually sent to a DAC via optical or coaxial s/pdif, aes/ebu, usb, etc.

finally and to further clarify, the above discussion is only concerned with data transmission and is separate from a discussion of electrical noise such as RFI, EMI, etc. that can be transmitted alongside the bitstream.
 
see this link for a great overall description of the network topology: http://www.msbtechnology.com/renderernetwork/

in this topology, an encoded audio file is sent via ethernet to the renderer module, which then decodes the file and converts it into a timed, digital bitstream using the DAC's onboard master femtosecond clock. here, there is no re-clocking, as the MSB DAC is creating the timed bitstream itself.

i agree that the re-clocking of a timed, digital bitstream can be influenced by upstream clocks, with the amount of such influence, if any, being DAC specific. however, this still leaves open the question of whether ethernet is used to transmit this type of data... anyone know of a DAC that accepts a timed, digital bitstream over ethernet? my understanding is that rendered files are usually sent to a DAC via optical or coaxial s/pdif, aes/ebu, usb, etc.

The MSB renderer does not create anything as such; it just receives a signal originating from the NAS, transmitted via the router. The same applies in either MSB topology.

This signal can be transferred from the NAS or router via Ethernet, USB, AES, S/PDIF, etc., depending on your setup. Ethernet just has the advantage that it uses TCP/IP error detection and retransmission, which ensures that the bits are received intact and in the order they were sent.

The renderer, as would any DAC, is then re-clocking the signal.

PS: The funny thing about this MSB picture is that it is inaccurate. The mobile device only interacts with the renderer/DAC, i.e. the Roon endpoint or UPnP client, depending on your setup, not with the router and the NAS. That is done by the DAC/renderer.


 

Hi aKnight (sorry I don't know your actual name),
It took me a while to read and understand your comment above, but upon reflection, no, I don't think there are any differences between the examples I gave above and your situation, or the situation that John Swenson was discussing. In fact, I think they are one and the same.

Just to start off, all the info I posted above was specifically speaking to your point above: "I am referring to ethernet transmission of an audio file itself (e.g. flac, wav, mp3)". Well, that file has to go to... something: either an endpoint, network bridge, or renderer, or, if it has an RJ45 port, the DAC itself.

It's exactly this transmission of a digital audio file, from some form of storage media on a computer, server, NAS, etc., to a destination (e.g., an endpoint, renderer, network bridge, or DAC) that Swenson was specifically referring to. The ethernet (or USB) transmission from any point A to point B IS a bitstream, and this is exactly the situation in which a number of the problems John Swenson described occur. The PHY chip in an Ethernet subsystem pulls the digital file from the MAC layer and sends it as analog square waves along the Ethernet (or fiber) cable to the downstream Ethernet port, whether that port is an endpoint, renderer, network bridge, Ethernet-supported DAC, whatever. From Wikipedia: "the Ethernet PHY is a chip that implements the hardware send and receive function of Ethernet frames; it interfaces between the analog domain of the Ethernet's line modulation and the digital domain of link-layer packet signaling."

The very same functionality, and the attendant problems with EMI/RF noise, clock phase noise, galvanic isolation, etc., also apply to the USB/S/PDIF path from the endpoint, renderer, or network bridge to a DAC.

So, when you say "NOT the transmission of an audio file that has been decoded and rendered into a timed, digital bitstream", with all due respect, I think this description is incomplete, and thereby inaccurate. Any time you send a request to a digital system to send a digital file to some other "place", there is a conversion from the digital domain of link-layer packet signalling, via the MAC, to either an Ethernet or USB PHY chip, which converts it to analog square waves and sends it to its respective destination, where it's received by the downstream receiver/PHY chip. Moreover, any device that transmits digital-sourced data, whether it be a cable modem, router, music server, NAS, Ethernet switch, renderer, network bridge, or DAC, has a clock. If the upstream clocks for the "generic IT stuff" (the router, music server, NAS, Ethernet switch) are sh*tty, then you are going to have clock phase noise added at every stop along the path from one device to another. There's only so much the clock at the renderer or DAC end can do to fix these dirty fingerprints, as John puts it.

Also, as far as I understand it, the renderer/streamer/network bridge does not do any decoding; that is solely the function of the D/A converter in the DAC.

Lastly, I think the information from MSB is written in a way that is misleading with respect to what is actually happening, functionally speaking. From the MSB description above, I don't think the renderer is decoding anything, unless it is doing some sort of transformation that is not described, like converting PCM to oversampled DSD, etc. And the renderer doesn't correct the timing; the FEMTO clock sub-system of the DAC chip does. From what I can gather, the description from MSB is both confusing and inaccurate, unless my understanding of what the renderer is doing is completely off-base. The written description looks like it was written by a marketing person, not an engineer like John Swenson.

If you still have questions, you should take them directly to MSB, because I don't understand exactly what MSB is saying above; I find it ambiguous and misleading.

Sorry I can't be of more help...

-Stephen aka PC
 
Also, as far as I understand it, the renderer/streamer/network bridge does not do any decoding; that is solely the function of the D/A converter in the DAC.

In MSB's case, the renderer module does the first-level conversion for MQA, the so-called unpacking. The remainder of the signal conversion is done in the DAC.

In addition, in the MSB ladder DAC architecture the clock is separated from the DAC. In an IC-based DAC it would be integrated.


 
In MSB's case, the renderer module does the first-level conversion for MQA, the so-called unpacking. The remainder of the signal conversion is done in the DAC.

In addition, in the MSB ladder DAC architecture the clock is separated from the DAC. In an IC-based DAC it would be integrated.



Thanks for the clarification. aKnight did not mention that the renderer module does an MQA unfold; so per my comment above, the decoding is not from digital to analog, it's from one digital form to another; basically like an ALAC file container being "unpacked" to an AIFF. The MSB clock, then, is a separate subsystem within the MSB that does the timing correction before the time-corrected signal is sent to the D/A subsystem. My guess is this is because the dedicated clock subsystem has higher functionality than a clock function embedded within a DAC chip itself.

But none of that applies to the problems originally described above of sending the digital file along an "Ethernet/optical Ethernet" path from the server to the renderer (which I would assume is the first "port of call").
 
My guess is this is because the dedicated clock subsystem has higher functionality than a clock function embedded within a DAC chip itself.

Incorrect. The reason is simply the ladder DAC architecture, which is less integrated than an IC architecture. The signal actually has to travel further between clocking and conversion in a ladder DAC.

But none of that applies to the problems originally described above of sending the digital file along an "Ethernet/optical Ethernet" path from the server to the renderer (which I would assume is the first "port of call").

Again incorrect. MSB has both an Ethernet and a USB module. Thus it can apply, depending on the DAC configuration.


 
Incorrect. The reason is simply the ladder DAC architecture, which is less integrated than an IC architecture. The signal actually has to travel further between clocking and conversion in a ladder DAC.

Thanks for the info. If I understand correctly, then, because it's a true ladder DAC, there is not actually a DAC "chip" that could incorporate a clock.

Again incorrect. MSB has both an Ethernet and a USB module. Thus it can apply, depending on the DAC configuration.

Thus what can apply? Sorry, I still don't understand. I still don't know why all the signal degradation/failure modes and clock phase noise that Swenson has described, which can occur along the path from the music file server to either MSB input (Ethernet or USB), could not have an impact on the signal integrity of the analog square waves being transmitted from the source to the destination.
 
Thanks for the info. If I understand correctly, then, because it's a true ladder DAC, there is not actually a DAC "chip" that could incorporate a clock.

They are just separated, because when ladder DAC designs were invented it was not yet possible to integrate them.

Thus what can apply? Sorry I don't understand.

The Ethernet question can apply in MSB's case as well, if it is configured with an Ethernet renderer module.


 
my question concerns the case where an audio file is transmitted over ethernet to a "box" which then writes the file to storage (ssd) or to memory (ram) for either later access or caching... does that stored (at rest) file then contain the fingerprint of the upstream clock? if so, it would follow that the audio file has then been changed forever and that would then violate all the ethernet protocols governing error detection and correction.

similarly, what about the case of a file transmitted (streamed) over ethernet to a box which then buffers the data for immediate use / rendering? even in this case, the presence of any fingerprint from the upstream clock would also seem to "permanently" change the file data, thereby, also violating ethernet error protocols?

thanks in advance for any comments here to help further my understanding of this.

After FLAC decompression, the digital bits 0 and 1 are always the same, regardless of what storage media they pass through. The fingerprint theory that was mentioned, for those who believe it, is about the phase noise of the clock (i.e. jitter) that feeds the DAC, not the digital 0s and 1s that feed the DAC.
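For anyone who wants to check the first part of that themselves, here is a minimal sketch (assuming the python soundfile library and two hypothetical copies of the same track, one local and one that arrived over the network). Unlike the file-level hash earlier in the thread, this one hashes the decoded PCM samples, i.e. the bits after FLAC decompression:

import hashlib
import soundfile as sf   # decodes FLAC via libsndfile

def pcm_digest(path):
    # decode to raw samples and hash them; any change to the audio data shows up here
    samples, _rate = sf.read(path, dtype="int32")
    return hashlib.sha256(samples.tobytes()).hexdigest()

local = pcm_digest("/music/local/track.flac")       # hypothetical paths
fetched = pcm_digest("/music/nas_copy/track.flac")
print("decoded samples identical:", local == fetched)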
 
After FLAC decompression, the digital bits 0 and 1 are always the same, regardless of what storage media they pass through. The fingerprint theory that was mentioned, for those who believe it, is about the phase noise of the clock (i.e. jitter) that feeds the DAC, not the digital 0s and 1s that feed the DAC.

I think there are a couple of misconceptions here.

In streaming, the audio signal is not stored anywhere; it is a pass-through. Second, the audio stream is an analog signal, not zeros and ones. For different transport mechanisms it is just packaged in different ways.

That is similar to different audio formats, where the actual content is always the same (except for the bitrate used for storage). The packaging algorithm just differs; some are more efficient than others. The various formats have different properties, and hence create varying levels of interference with the content.


 