The awesome thing about a well-executed blind test is that it tends to strip away the negative elements that usually influence somebody's impression of a component. Bias. Ego. Preconceived notions. All of that stuff flies right out the window once the visual factor is taken out of the equation. What the listener is left with is a raw sense of how a particular component *actually* performs within a given space/system. Suffice it to say, this methodology can serve as a valuable tool – one that can lead to some pretty eye-opening experiences!
That being said, the not-so-awesome thing about blind testing is that it's nearly impossible to execute properly. What do I mean? Well, let's say you want to evaluate three different loudspeakers. Somehow or other, you've managed to orchestrate the ideal blind test. The whole experiment will take place in your room, on your system, and you'll be listening to your favorite music throughout the entire test. Sounds cool so far, right?
Well, there’s just one tiny problem…
Unless you (and whoever will be helping you out) own a wide variety of gear, can swap everything out on the fly without tipping you off to what's going on, and can position the speakers perfectly in your room each and every time, the entire experiment becomes moot. Why? Because you can't disadvantage a product on one hand and call it a fair evaluation on the other.
The big problem with blind testing is that it's only effective for super straightforward product evaluations. I'm talking cables, stands, drivers, crossover components, racks, tweaks, etc. Once you move on to more complex components, it becomes increasingly difficult to accommodate each product's unique needs. And if you can't accommodate each product's needs, then the whole test becomes compromised.
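For the straightforward stuff – say, comparing two interconnects – the mechanics really can be that simple. Just as an illustration (this isn't anything from the test described above, and the trial count and "A"/"B" labels are placeholders), here's a rough sketch of how a helper might randomize and score a basic blind A/B session:

```python
import random

def run_blind_ab(trials=10, seed=None):
    """Randomize which of two devices ('A' or 'B') plays on each trial,
    collect the listener's guess, and tally how many guesses were correct."""
    rng = random.Random(seed)
    correct = 0
    for t in range(1, trials + 1):
        playing = rng.choice(["A", "B"])  # the helper silently switches to this device
        guess = input(f"Trial {t}: which device is playing? [A/B] ").strip().upper()
        if guess == playing:
            correct += 1
    # With two options, pure guessing hovers around 50% over enough trials.
    print(f"{correct}/{trials} correct (~{100 * correct / trials:.0f}%)")

if __name__ == "__main__":
    run_blind_ab(trials=10)
```

The point being: when the swap is trivial, the test can be this clean. It's the complex gear – the stuff that needs its own positioning, warm-up, or matching – where this tidy little procedure falls apart.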
Anyway, at the end of the day, I feel like the truth lies at the intersection between the pro-blind testers and the anti-blind testers. The methodology can be very useful. However, it shouldn't be looked upon as the only valid way of judging the performance of a component.