(Since I'm here for a moment...)
At the risk of exhibiting the same sort of posting behavior most of us are criticizing, I think Matt's posts could have been summarized much more concisely than they first appeared here.
I (Matt) like to use (sort of) blind testing to convince myself that for components where I think there shouldn't be a difference (e.g., cables, servers) there really isn't a difference. I don't use any ersatz blind testing to help me choose my components (not specified) where I think there should be differences. Even though I (Matt) am careful to say that my opinions only apply to my choices, my endless posts strongly imply that those opinions should apply to everyone.
^^^ That really is a good example of how difficult it is to summarize someone else's position if it's one you don't agree with. The tendency to strawman will be almost irresistible.
As I mentioned more than once in the thread, I have detected differences in blind tests for some gear. I have also done blind tests on items where I was fairly confident I was hearing a difference - e.g. between my tube preamp and my Benchmark preamp. I was easily able to reliably identify the tube preamp. I wasn't purchasing on the basis of the blind test - I already owned both and was just interested. So, no, it is not the case that I'm simply trying to confirm a prior opinion that I'll hear no differences in blind tests.
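For what it's worth, "reliably identify" just means scoring well above chance guessing. Here's a rough sketch of the usual binomial arithmetic behind a blind comparison; the 10-trial, 9-correct numbers are purely illustrative placeholders, not a record of my actual trials.

```python
# Illustrative only: assumes a hypothetical 10-trial blind comparison with 9 correct
# identifications; the actual trial counts aren't recorded in this thread.
from math import comb

def p_value_at_least(k_correct: int, n_trials: int, p_chance: float = 0.5) -> float:
    """Chance of getting k_correct or more right by pure guessing (binomial tail)."""
    return sum(
        comb(n_trials, k) * p_chance**k * (1 - p_chance) ** (n_trials - k)
        for k in range(k_correct, n_trials + 1)
    )

print(round(p_value_at_least(9, 10), 4))  # 0.0107 - unlikely to be lucky guessing
```

In other words, something like 9 out of 10 correct would happen by luck only about 1% of the time, which is the sense in which a blind result can be called reliable.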
As I've said, I don't claim any audiophile must do blind testing, and I often don't myself. I don't bother (and it's difficult anyway) in cases where audible differences are well established - e.g. speakers. It's always possible some bias is influencing my perception of a loudspeaker, but I'm fine with going on my impressions because there are highly plausible sonic differences between speakers, and blind testing them would be entirely impractical...and a major hassle. If we want to be scientists then, yes, we would want to tightly control variables to get to the bottom of a phenomenon. But we can't do science all day on every choice (and who would want to?), so pragmatism makes sense.

When I'm cooking and testing recipes and I add more salt, it COULD be that the dish tastes a bit saltier to me because I'm influenced by the knowledge that I added more salt. But then, adding salt clearly CAN make food taste saltier. As a practical matter, it's entirely reasonable to proceed with cooking as normal, going with impressions and plausible enhancements from ingredients, rather than engaging in chemical analysis and blind testing in everyday life. I approach much of this audio hobby and my purchases the same way. I don't demand scientific-level scrutiny as a rule; I often can't be bothered with it. But when it comes to the areas of controversy in audio, where I'm skeptical, I take a more critical look at the evidence.
Would you like to point out anything that is actually unreasonable about this position?
As to this:
Even though I (Matt) am careful to say that my opinions only apply to my choices, my endless posts strongly imply that those opinions should apply to everyone.
So you read my constant caveats that "I'm not suggesting other audiophiles need to do blind testing, enjoy the hobby any way you want" as implying the exact opposite. Don't read the words; just re-characterize them into something you want to reject. That's a convenient way to never accept someone's argument and to keep strawmanning.
And, btw, in regard to "opinions that should apply to everyone": is it possible you are taking a biased view? You don't seem to have a problem with others here making claims they present not as "mere opinion" but on the grounds that they think other opinions are wrong. For instance, I've been admonished by Mike:
"Oh dear Matt, you have so much to learn young grasshopper. You can hear what you can measure, but you can’t measure what you can hear (tone, depth of soundstage, instrument separation, etc)."