Found this on the web:
Transmission line impedance is critical in some applications, and not so critical in others. In analog audio, particularly, impedance is basically a nonfactor--because at the relatively low frequencies involved in analog audio, and at anything approaching ordinary lengths, any reasonably designed cable will effectively "pass through" the impedance of the devices at either end--and the input and output impedances of line-level analog audio devices themselves are usually not critical. For analog audio cables, other design considerations like shielding and capacitance may be very important, but impedance really is not.
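A quick back-of-the-envelope calculation makes the audio case concrete. A signal's electrical wavelength in a cable is the propagation velocity divided by the frequency; the 0.66 velocity factor below is a typical assumed value for solid-dielectric coax, not a figure from the text, and the exact number doesn't change the conclusion:

```python
# Electrical wavelength in a cable: lambda = (velocity_factor * c) / frequency
C = 299_792_458          # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66   # typical for solid-polyethylene coax (assumed value)

def wavelength_m(freq_hz, vf=VELOCITY_FACTOR):
    """Electrical wavelength of a signal travelling in a cable, in meters."""
    return vf * C / freq_hz

# At the top of the analog audio band (20 kHz), the wavelength is nearly
# 10 km, so a few-meter audio cable is a vanishingly small fraction of a
# wavelength and transmission-line effects never come into play.
audio_wl = wavelength_m(20_000)
# At video frequencies (several MHz and up) the wavelength shrinks to
# tens of meters -- comparable to real cable runs.
video_wl = wavelength_m(10_000_000)
print(f"20 kHz: {audio_wl:,.0f} m   10 MHz: {video_wl:.1f} m")
```

This is why the same coax that is "just wire" at audio frequencies must be treated as a transmission line at video frequencies.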
But the behavior of cables changes as signal frequencies increase. This is so because as frequency increases, the electrical "wavelength" of a signal becomes shorter and shorter; at video frequencies, signal wavelength is short enough to start causing problems. As the length of a cable approaches a significant fraction of the electrical wavelength of the signal it carries, the likelihood of significant, picture-altering reflections from impedance mismatch increases. The whole cable can resonate at the wavelength of the signal, or of a portion of the signal, and the impact on signal quality will be anything but good. Video signals, too, are complex; they occupy not a single frequency but a whole range of frequencies--this is why we so often speak of the "bandwidth" of a signal--and so a mismatch will affect different parts of the signal differently.
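The reflections mentioned above have a simple quantitative form: at the end of a line, the fraction of the incident voltage wave that bounces back is (Z_load - Z_cable) / (Z_load + Z_cable). A minimal sketch, with hypothetical impedance values chosen for illustration:

```python
def reflection_coefficient(z_load, z_cable):
    """Voltage reflection coefficient at the load end of a transmission line."""
    return (z_load - z_cable) / (z_load + z_cable)

# A properly matched 75 ohm load on 75 ohm video cable: no reflection.
print(reflection_coefficient(75, 75))   # 0.0
# Terminating the same cable in 50 ohms reflects 20% of the incident wave
# (the negative sign means the reflected wave is inverted).
print(reflection_coefficient(50, 75))   # -0.2
```

That reflected energy travels back up the cable and re-reflects at the source, which is what produces ghosting and other picture-altering artifacts at video frequencies.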
Because the effects of impedance mismatch are dependent upon frequency, the issue has particular relevance for digital signals. Where analog audio or video signals consist of electrical waves which rise or fall continuously through a range, digital signals are very different--they switch rapidly between two states representing bits, 1 and 0. This switching creates something close to what we call a "square wave," a waveform which, instead of being sloped like a sine wave, has sharp, sudden transitions (in practice, the "square waves" in digital signals aren't really quite square). Although a digital signal can be said to have a "frequency" at the rate at which it switches, electrically, a square wave of a given frequency is equivalent to a sine wave at that frequency accompanied by an infinite series of odd harmonics--that is, odd multiples of that frequency. If all of these harmonics aren't faithfully carried through the cable--and, in fact, it's physically impossible to carry all of them faithfully--then the "shoulders" of the digital square wave begin to round off. The more the wave becomes rounded, the greater the likelihood of bit errors. The device at the load end will, of course, reconstitute the digital information from this somewhat rounded wave, but as the rounding becomes worse and worse, eventually there comes a point where the errors are too severe to be corrected, and the signal can no longer be reconstituted. The best defense against the problem is, of course, a cable of the right impedance: for digital video or S/PDIF digital audio, this means a 75 ohm cable; for AES/EBU balanced digital audio, this means a 110 ohm cable.
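The harmonic decomposition and the "rounding" it implies can be demonstrated numerically. This sketch builds a 1 Hz unit square wave from the standard Fourier series, (4/pi) * sum of sin(2*pi*n*f*t)/n over odd n, and shows that truncating the series (as a bandwidth-limited cable effectively does) slows the transitions:

```python
import math

def square_partial(t, n_harmonics, freq=1.0):
    """Partial Fourier sum of a unit square wave: first n_harmonics odd terms."""
    total = 0.0
    for k in range(n_harmonics):
        n = 2 * k + 1  # only odd multiples of the fundamental are present
        total += math.sin(2 * math.pi * n * freq * t) / n
    return 4.0 / math.pi * total

# With many harmonics, the flat top of the wave sits near the ideal 1.0 ...
plateau = square_partial(0.25, 100)
# ... but just after the 0 -> 1 transition, a version carrying only a few
# harmonics has barely begun to rise: the "shoulder" is rounded off.
edge_few = square_partial(0.002, 5)
edge_many = square_partial(0.002, 50)
print(f"plateau ~ {plateau:.3f}  edge(5 harmonics) ~ {edge_few:.3f}  "
      f"edge(50 harmonics) ~ {edge_many:.3f}")
```

The receiver slices this rounded waveform back into 1s and 0s, which is why moderate rounding is harmless but severe rounding eventually makes the transitions ambiguous.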