MQA Discussion

I wouldn't call it a must-watch. He actually said "in his opinion, MQA has less loss than PCM & DSD". Must be magic pixies.
I thought it was a must-watch and funnier than anything on Comedy Central. MQA has "less loss" than PCM or DSD because... Hans said so. Never mind the bits lost and the patent itself spelling it out. Maybe in his next video he'll explain how it actually has less aliasing distortion... because he said so and "he and his colleagues heard it".
You can't make that stuff up LOL.
Is there any reason why folks can't just like MQA because of the remastering/aliasing distortion/EQ enhancement, rather than a whole bunch of malarkey? Do the SET guys make as many ridiculous excuses as to why they like the sound?
It really is ok to like something just because, rather than for made up nonsense. Seriously, it's ok.
 

 
IMHO that should also put the effectiveness of double-blind tests in perspective.
Totally agree with you, since we both read the entire published paper and saw they had absolutely no problem telling the difference with certain tracks, but that whereas one track was preferred in PCM, the other was preferred in MQA.
i.e., the added aliasing distortion/EQ "spicing" of MQA can be better... or worse, depending on the original mastering. Yes, blind tests in state-of-the-art controlled listening facilities as shown, with highly trained testers whose listening abilities are verified, are indeed the ultimate in effectiveness (a rough sketch of how such forced-choice scores are judged against chance follows at the end of this post). Which is why it is the de facto gold standard of science, rather than casual, easy-chair, all-variables-uncontrolled sighted "listening". Thanks for concurring. My experience with MQA was exactly the same, hit and miss.

P.S. BTW, it was also nice to confirm at a recent private audition with the Florida Orchestra that they too, like most major orchestras, use blind auditions to eliminate bias when selecting members. How cool is that?
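
For anyone wondering how "no problem telling the difference" gets judged in such a test, here is a minimal sketch of the usual binomial check for a forced-choice (ABX-style) blind trial. The protocol, the 50% guess rate, and the trial counts below are my own illustrative assumptions, not figures from the Stuart/McGill paper.

# Minimal sketch: is a forced-choice listening score better than guessing?
# All numbers here are hypothetical, NOT taken from the published MQA paper.
from math import comb

def p_value_at_least(correct, trials, p_guess=0.5):
    # One-sided binomial probability of scoring `correct` or more by chance.
    return sum(comb(trials, k) * p_guess**k * (1 - p_guess)**(trials - k)
               for k in range(correct, trials + 1))

# A listener scoring 14/16 on one track is very unlikely to be guessing...
print(p_value_at_least(14, 16))  # ~0.002
# ...while 9/16 on another track is indistinguishable from chance,
# i.e. exactly the "hit and miss" pattern across different masterings.
print(p_value_at_least(9, 16))   # ~0.40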
 
AJ Sound field,

I haven't read the entire paper nor do I concur with the conclusions that you've drawn.

Just think about the outcomes of double-blind tests of violins...

Nor do I think that MQA is hit and miss.

In my non-expert and certainly not scientific approach, I almost always prefer the MQA version to the Redbook alternative whilst using Tidal (Masters).

Cheers!

Re your P.S.: I am convinced that that is a way to mitigate possible race, gender, etc. discrimination.
 
AJ Sound field,

I haven't read the entire paper nor do I concur with the conclusions that you've drawn.
Ah, those might be related. Sorry, I thought you would have wanted to before casting aspersions or drawing conclusions.

Just think about the outcomes of double-blind tests of violins...
Exactly, even more proof they work and why they are the gold standard of science, including fields of perception like audio, testing the sound>ears of violins, orchestra members playing violins, etc, etc.

Nor do I think that MQA is hit and miss.
In my non-expert and certainly not scientific approach, I almost always prefer the MQA version to the Redbook alternative whilst using Tidal Masters.

Cheers!
Yes, of course, that all makes sense from an approach vs results standpoint. Enjoy Tidal (while you still can).

cheers,

AJ
 
Exactly, even more proof they work and why they are the gold standard of science, including fields of perception like audio, testing the sound>ears of violins, orchestra members playing violins, etc, etc.

Regarding double-blind tests and measurements, what you call the science gold standard, I am far from convinced that they are truly effective and that you can reach all the valid conclusions simply by considering the published results.

The McNamara Fallacy comes to mind.

Specifically about the violin results and double-blind tests, I also like to take into account what some true experts have to say:


https://www.thestrad.com/blind-testing-strads-and-guarneris-misses-a-fundamental-point/6944.article

Cheers!
 
Regarding double-blind tests and measurements, what you call the science gold standard
That's not my descriptor, it's a fact. There is a difference.

I am far from convinced that they are truly effective and that you can reach all the valid conclusions simply by considering the published results.
Then I sure hope you don't take pharmaceutical drugs, use an intelligible cell phone, attend major orchestras, rely on particle physics results, etc., etc., etc.
What is your viable, proven, better alternative to the de facto standard used by science?

The McNamara Fallacy comes to mind.
Weak. That has nothing to do with violin/orchestra member tests whatsoever. No statistical data is mined, sorry. My guess is you are totally unfamiliar with musician auditions.

Specifically about the violin results and double-blind tests, I also like to take into account what some true experts have to say:
https://www.thestrad.com/blind-testing-strads-and-guarneris-misses-a-fundamental-point/6944.article
A true expert in excuses. :)
He presented zero scientific rebuttal in his pure-opinion excuses piece, sorry.

cheers,

AJ
 
That's not my descriptor, it's a fact. There is a difference.


Then I sure hope you don't take pharmaceutical drugs, use an intelligible cell phone, attend major orchestras, rely on particle physics results, etc., etc., etc.
What is your viable, proven, better alternative to the de facto standard used by science?


Weak. That has nothing to do with violin/orchestra member tests whatsoever. No statistical data is mined, sorry. My guess is you are totally unfamiliar with musician auditions.


A true expert in excuses. :)
He presented zero scientific rebuttal in his pure-opinion excuses piece, sorry.

cheers,

AJ

In terms of music reproduction enjoyment/preferences, I tend to use my own hearing ability. Alas, I can't use measurement devices or double-blind test outcomes to listen to music.
True experts in HiFi like Nelson Pass understand the limitations of measurements.

And I'm under the impression that the existence of major orchestras preceded measurements, published papers, and double-blind tests...

Is it weak? The McNamara Fallacy is not just about data mining. From Wiki:
The McNamara fallacy (also known as quantitative fallacy), named for Robert McNamara, the United States Secretary of Defense from 1961 to 1968, involves making a decision based solely on quantitative observations (or metrics) and ignoring all others. The reason given is often that these other observations cannot be proven.

I'd say being a professional violinist not only makes him an expert, it probably also allows him not to fall into the McNamara Fallacy or suffer from the Dunning-Kruger effect.

Cheers!
 
In terms of music reproduction enjoyment/preferences, I tend to use my own hearing ability. Alas, I can't use measurement devices or double-blind test outcomes to listen to music.
Correct: for mere enjoyment of math, running, and especially music, one needs only listen and prefer; there is no need to measure (unless one wanted a much better-sounding system, free of terrible bass peaks, out-of-phase or reversed channels, etc.), or to blind-test one's ability. One can just prefer whatever one prefers and enjoy.
It is only if one wanted to actually test those abilities that measurements and tests would come into play. That's why one doesn't grade one's own math test, why Olympic runs are precisely timed, and why all the testers in the referenced Bob Stuart MQA test have assessed, verified abilities, just like with math or running. Self-assessed, merely believed abilities are verboten.

True experts in HiFi like Nelson Pass understand the limitations of measurements.
Exactly, that's why Mr. Pass has never, ever designed an amplifier without measurements. That's why Mr. Pass designs on the foundation of his science education. Since these are audio devices, he knows measurements, science, and listening are all essential. Exactly as was done in the MQA test. :)
He, like the testers and me, fully understands the value of all the factors.

And I'm under the impression that the existence of major orchestras preceded measurements, published papers, and double-blind tests...
Yes, voodoo and witchcraft preceded modern medicine also. Some are doomed not to learn from history and may still go to such practitioners. Luckily, some know better now, so modern medicine is based on science and double-blind tests, as are modern orchestras: http://gap.hks.harvard.edu/orchestrating-impartiality-impact-“blind”-auditions-female-musicians
I think we can all be thankful that the misogyny, bigotry, biases, etc. that plagued the past are well addressed today, at least for many. I love seeing more female orchestral players! ;)

Is it weak? The McNamara Fallacy is not just about data mining. From Wiki:
The McNamara fallacy (also known as quantitative fallacy), named for Robert McNamara, the United States Secretary of Defense from 1961 to 1968, involves making a decision based solely on quantitative observations (or metrics) and ignoring all others. The reason given is often that these other observations cannot be proven.
Yes, because blind tests don't ignore the other factors; they account for them, unlike bias-filled, uncontrolled sighted "listening" with self-assessed abilities. The test methods chosen by Stuart are not based solely on measurements, but also on actual trust-your-ears/just-listening abilities. Red herrings for rejecting the scientific method are extremely weak.

I'd say being a professional violinist not only makes him an expert
That is a classic appeal-to-authority fallacy. Being a professional violinist doesn't make him immune from biases. If he can prove that he can tell the difference between the sounds of violins by listening, free of the beliefs, biases, etc. that plague all humans, "expert" or not, then he can demonstrate his "expertise". Just like one has to take a math test and have it graded by another party, or run a 100m dash and have it measured, either against other runners or against the clock.
Being a self-assessed/self-declared "expert" is irrelevant here; only demonstrable ability is. He can either do that, or "talk" about it like in that article. :)

it probably allows him not to fall into the McNamara Fallacy or suffer from the Dunning-Kruger effect.
Perfect. Your knowledge of subjects is spot on.
https://en.wikipedia.org/wiki/Dunning–Kruger_effect
In the field of psychology, the Dunning–Kruger effect is a cognitive bias in which people of low ability have illusory superiority and mistakenly assess their cognitive ability as greater than it is. The cognitive bias of illusory superiority comes from the metacognitive inability of low-ability people to recognize their lack of ability; without the self-awareness of metacognition, low-ability people cannot objectively evaluate their actual competence or incompetence.
This is exactly why self-assessment of ability is a really, really bad idea. Thank you!
Yes, to get back to the thread subject, Bob Stuart is acutely aware of this and chose to team with McGill University (a great higher-ed institution, btw) to do a non-Dunning-Kruger, non-"self-assessment/grade-my-own-math" type of analysis of MQA. He made a very wise choice! Thanks again for all the info.

cheers,

AJ
 