Brilliant article by Art Dudley.

Isn't that a question that you really need to ask Harman? They obviously feel the testing is worthwhile or they wouldn't conduct it.

I'll bet Harman does it just a little differently than you think, because I think it's highly likely that they get worthwhile feedback about how different speakers sound and what type of sound "most people" prefer, and use this to influence their designs. Although I don't know for sure, I suspect any "training" listeners get is designed to make them aware of the differences between true live music sound and the reproduced music sound we are all exposed to every day (earbuds, PAs, shopping centers, etc.). That was the impression I got from Amir's description.
 
Again, I agree that Harman is getting something worthwhile out of this testing or they wouldn't conduct it.
 
Sorry Robert, don't bet; investigate. Have you seen the training software that Harman uses? It's freely available to download, not to mention it was discussed on that other forum. Several reviewers have also visited Harman and reported their experiences. I suggest reading that when you want to lose 30 minutes of your life.
 
It was all posted and talked about on WBF. I don't remember the threads exactly, but try Sean Olive's forum.

But seriously, no one has read how they [Harman] carry out their speaker tests, or seen or even tried the downloadable software they use for their testing, yet they're criticizing Mark?

Once one of them pounces, more pile on the heap. :| Again I will ask those who claim to love the Harman testing methodology: do you love the Harman speaker brands and own them? If not, why not? If you previously owned them, sold them, and bought speakers outside of the Harman brand, how come? Don't be shy, speak up. Or, if you only like Harman's testing because they are essentially the only double-blind testing game going on in this hobby and you just want to ride the double-blind horse even if you don't want to take it home to your corral, let us know that too.
 
I owned Salon 2s and loved them. I preferred my D3s, so I bought them. Would I own Revel speakers again? Absolutely.
 
Kevin Voecks's thumbprint is all over Revel speakers; you can't separate the two. I was a fan of his when I first heard the Mirage M-1s - still a great speaker. I presently suffer from "Jax disease", you may have heard of it. I'm always looking for a deal on Series I Salons. I recently had an opportunity to acquire Wilson WP6es or Series I Salons. The Wilsons appeal to the heart and the Salons to the head; they're the more neutral of the two. The Snell B minors I now use were Kevin's swan song before he joined Revel; they're one of the best speakers no one has ever heard. They were a gift at the price they sold for new and a complete steal used. They dip into the mid-20s in my room and love tubes.

If you were to ask Kevin today, he'd prefer SS and high-res digital sources to demo his speakers with, which conflicts with my inner audiophile, but the proof is in the pudding: they're great speakers.
 
Kevin Voecks is a well-respected and talented speaker designer so it's no surprise that you liked his speakers.
 
Well, I was unwilling to waste 30 minutes since what I did listen to (about 12 minutes) didn't show me anything I didn't already know. For a non-audiophile (and maybe even some audiophiles) I think it would be useful, and I really don't see why using the skills it teaches should lead one to necessarily favor Harman speakers over other well-designed and well-made speakers.

I'd certainly be happy with a pair of Salon 2s, and I know a couple of well-heeled audiophiles who own them and are quite happy with them. And apparently there are even some owners right here on this site.
 
I'm a Revel Salon 2 owner and I think they are wonderful sounding speakers and a tremendous value at their price point. I've also auditioned the F208 and was really impressed with them as well, another tremendous value.
 
+1. I owned the Salon 2s and they are fabulous. I heard the F208s as well and was very impressed - especially for the money.


 
My last speakers were Raidho D3s. I currently have Salon 2s. Love them both and feel lucky to have been able to enjoy such great speakers. I had Revel Studio 2s before the Raidhos. All of them worked great for me, even in my small room.
 
Interesting that you made the jump from Raidho D3s to Salon 2s. Which one do you like better and why?
 
When I visited Harman for JBL Synthesis training we toured the entire facility, and it was a little overwhelming to see all the testing areas. It's obvious they have deep, deep pockets. I don't think any of their testing is used as a sales tool. I think they use all of their resources to educate themselves on what the general public perceives as good sound. Then they use all of their resources and knowledge to build incredible speakers.
 
Whatever they are doing... it works for me, and I agree that they make incredible speakers.
 
Hi all, just joined, but I know most of you from WBF. So this is where you are all hanging out - it's like walking into the kitchen at a party & finding everybody there :D

Haven't read Dudley's article yet, but I believe it is about double-blind testing (DBT) rather than personal blind testing, which I think are two different things?

The case for well-controlled DBTs using ITU or MUSHRA standards seems to be well established, but I don't see many tests of this standard appearing in audio. The reason for these standards is to attempt to control the known biases that can influence the results. As a result, one has to assume the vast majority of blind tests are probably biased & flawed. Even the Harman tests seem to me to be biased by the very fact of using a single speaker for their tests.
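For anyone who hasn't run into MUSHRA (ITU-R BS.1534), here's a rough Python sketch of what a single trial involves - the class & field names are my own, not from any standard implementation - just to make concrete what "well controlled" means:

```python
# Rough sketch of one MUSHRA-style trial (per ITU-R BS.1534); names are mine.
from dataclasses import dataclass, field
import random

@dataclass
class MushraTrial:
    """The listener rates every stimulus 0-100 without knowing which is which.

    A hidden copy of the reference and a low-quality anchor are mixed in with
    the systems under test, so careless or insensitive listeners can be
    screened out afterwards (e.g. anyone who rates the hidden reference low).
    """
    systems_under_test: list[str]
    stimuli: list[str] = field(init=False)

    def __post_init__(self) -> None:
        self.stimuli = self.systems_under_test + ["hidden_reference", "low_anchor"]
        random.shuffle(self.stimuli)  # presentation order randomised per trial

trial = MushraTrial(["speaker_A", "speaker_B", "speaker_C"])
print(trial.stimuli)
```

The hidden reference & anchor are precisely the kind of built-in controls that are missing from nearly every casual audio blind test I read about.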

I know the argument that goes "what's wrong with taking away knowledge of what's playing & just trusting your ears", & there is probably nothing wrong with this when applied to a self-test - we all have done (& continue to do) our own personal blind tests to verify/validate a difference that we thought was too close to be definite about. The problems arrive when we hear of blind tests being run at group get-togethers, or rather blind listening sessions. I often see the invariable null results from these sessions being used as some "higher quality" evidence - that these results are "more valid" than sighted listening sessions.

The problem here is that the inherent biases in such tests are not recognised (or not admitted to). Instead, the focus is on the fact that one obvious potential bias (sightedness) is being eliminated, & therefore the test must be "more valid".

The argument often used in these group blind listening sessions is that a sighted test which shows differences, followed by a blind test which no longer reveals differences, "proves" that blind testing is "more valid", "more revealing", "the truth". The suggestion being that only sightedness has changed. Actually, a lot has changed psychologically. The very act of calling it a "blind test" conditions the participants to the idea that they are under scrutiny - in exam parlance, it's like a pass/fail multiple-choice test rather than a discursive essay-style test.

Furthermore, second-guessing one's first thought is a naturally arising stress factor for most people any time they feel they are being tested or can be definitively exposed as wrong in their choice - it's an ego thing. The obvious pressure to avoid this exposure biases the results towards no difference heard.

So what we have are sighted tests which can have a bias towards false positives & blind tests which can have a bias towards false negatives. What conclusions can be drawn about contradictory results from these two flawed tests?
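To put a rough number on the false-negative side, here's a toy binomial calculation in Python - every probability in it is an assumption I've made up for illustration, nothing to do with any actual test:

```python
# Toy calculation: how a "play it safe & coin-flip" bias under test stress
# turns a genuinely audible difference into a null ABX result.
import math

def p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: P(X >= correct) under pure guessing (p = 0.5)."""
    return sum(math.comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

n = 16            # ABX trials in the session
p_hear = 0.75     # assumed: listener genuinely identifies X 75% of the time
stress = 0.5      # assumed: on half the trials they second-guess & coin-flip

p_effective = stress * 0.5 + (1 - stress) * p_hear    # = 0.625
print(p_value(round(n * p_effective), n))  # 10/16 -> ~0.23, a null result
print(p_value(round(n * p_hear), n))       # 12/16 -> ~0.04, significant
```

Same ears, same gear - on these made-up numbers, the stress alone is enough to turn a significant result into a null one.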

In my experience, sighted tests repeated over time - on different days, with different music, in different moods, maybe with different parts of the system changed - allow us to get to know the characteristic sound of a device by triangulation.

I would dearly love to see positive & negative controls being used in any casual blind tests which are being put forward as "more valid", "more true". A suggested positive control would be Winer's AD/DA loopback generation tests, which are subtle but which many have reported positive results with in ABX testing. Something like the sketch below is all I mean by controls.
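A minimal sketch, with made-up trial counts - not anyone's published protocol:

```python
# Fold hidden controls into a casual blind session; numbers are hypothetical.
import random

def build_trials(n_real: int, n_pos: int, n_neg: int, seed: int = 1) -> list[str]:
    """Interleave real comparisons with hidden controls in random order.

    'positive' = a known-audible difference (e.g. Winer's AD/DA loopback files):
                 if the panel misses these, the session is too insensitive.
    'negative' = A vs. A, no difference at all:
                 if the panel "hears" these, the session yields false positives.
    """
    trials = ["real"] * n_real + ["positive"] * n_pos + ["negative"] * n_neg
    random.Random(seed).shuffle(trials)  # fixed seed so the proctor can reconstruct the key
    return trials

print(build_trials(n_real=10, n_pos=3, n_neg=3))
```

If the panel nails the positives & stays silent on the negatives, then a null result on the real trials actually means something; otherwise it's just noise.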
 
I remember looking at this essay before on Olive's blog, and my interest in the published graph was slightly different to the heading of the piece, "The Dishonesty of Sighted Listening Tests" (which in itself is a loaded title).
[Graph: BlindVsSightedMeanLoudspeakerRatings.png - mean loudspeaker preference ratings, blind vs. sighted]

I would rename this paper "The Dishonesty of this Paper", as it states as its objective:
"An important question is whether sighted audio product evaluations produce honest and reliable judgments of how the product truly sounds"

So it sets out by ramping up the psychological sighted biasing factor by using 40 of Harman's own employees in a listening test held at Harman, in which 3 of the 4 speakers are Harman-brand ones. We don't know if any of the employees' bosses were in attendance or in sight. Is this an "honest & reliable" way of approaching the question of how much effect "normal" bias has on "normal" sighted listening? I think they would have been better off framing the objective as "Do you want to keep your job or not, punk? That is the question you have to ask yourself."

The paper goes on to state:
"Psychological biases in the sighted tests were sufficiently strong that listeners were largely unresponsive to real changes in the sound quality caused by acoustical interactions between the loudspeaker, its position in the room, and the program material." Doh! This is psychological bias ramped up to the level where the choice may have a bearing on your future in the company.

They make reference to this but don't state what adjustment they made to the results for this highly stressed test: "Brand biases and employee loyalty to Harman products were also a factor in the sighted tests, since three of the four products (G, D, and S) were Harman branded."

So what we see in the graph is that there is no great flipping of choices between sighted & blind - only speaker S's preference changes, it being preferred more in blind listening than sighted. In all other cases the relative preference for each speaker is maintained. The blind test results just depress the preference level for each speaker, so they become less differentiated from one another - much closer together in preference.

Also look at how close the sighted and blind preferences are for the non-Harman speaker, speaker T - pretty much no change. This is more likely a measure of the bias influence that would normally apply to people who don't have a vested interest in the outcome, i.e. insignificant.

Given the unnatural environment of this test, it seems to me to be the antithesis of what the title claims, "the dishonesty of sighted listening". Not only have they designed the test to magnify the expectation bias & psychological factors that should make the Harman products highly favoured (& therefore very different from the blind results), but it would seem to me that the blind results don't actually differ as much as I would expect given this ruse.
 
This is interesting, but it says nothing about subjects off the street - people with no stake in the outcome. If I was blindfolded and someone handed me three tennis rackets and asked me to pick the one that felt the best in my hand, my answer would be the same whether I can see the options or not. I have no idea what brand is better and I certainly have no tennis racket brand bias. Now, do that same test with a golf club, and I will always pick Titleist if I can see my options.

The point is, if the subjects have no preconceived biases one way or the other, of course their results will usually be the same. I'm not surprised.

I ran my own blind tests in my "big amp blind shootout" this year. I had participants who would rather chop off their left arm than buy a solid state amp. I had others who would never pick anything with tubes. Going only by sound - in my system (and that's the caveat) - they all picked the McIntosh 601s over the D'Agostino, VAC, Cary, etc. Second and third place swapped, but the 601s were always first and VAC always last. When the guy who hates solid state picked the 601s, blindfold on, swearing up and down that he was listening to a tube amp, I thought he was going to vomit (as an aside, he picked his own amp dead last, claiming it sounded like "nails on a chalkboard")... that gave us all a good laugh.

In another blind shootout, we compared DACs (Meitner MA-1, dCS and several others). The participants were, for the most part, vinyl haters. None had vinyl rigs, and even the thought of one gave them indigestion. As we cycled through various files (Redbook, high-res, DSD, etc.), the MA-1 was the clear winner (this was before the Lumin), but for fun, near the end, I snuck on a record (a few seconds into the song so they couldn't detect it was an LP). Everyone started jumping up and yelling "That's the winner! Whatever that is, that's the best one by far all day. Whatever that is, I want to buy it." I was on the floor laughing. When they found out it was a record, the excuses were flying.

The point is, biases play a huge part... but you have to have biases to begin with.


 
You sound like a conspiracy theorist.
I should have included a pic of me typing that post at the keyboard with my aluminium-foil-covered head, but I forgot :)
It's just my analysis of the shortcomings of that piece. I believe critical evaluation should be used in reading these things & not blind :) acceptance. Just because the word blind or double-blind is used doesn't make results valid - there's too much of that going on, in my opinion.
 
Exactly! Good stories, btw.
I had a similar experience with a sighted test of DACs. A guy was doing auditions of some DACs to replace his Arcam CD23 CDP & invited about 5 of us along to his place.
The contenders were:
Meitner MA-1 DAC
Lampizator Level 4.5 DAC
Chord Hugo DAC
dCS Purcell Upsampler and Elgar DAC (not the Plus version)

This was sighted listening, & there were differences between the DACs - most preferring the Meitner, but to be honest they were closer in sound than I expected, until we tried the dCS, which nobody liked. We all knew the reputation of all the DACs & expected the dCS would shine, or at least be the equal of the others, but not so. So our bias towards it did not influence the result: bottom of the class.

Afterwards, two things emerged from the guy running the audition: his Jadis amp, he learned, throws a blanket over everything, suppressing most of the differences between DACs. And the big one - the dCS was connected up wrong.
 