Audio Science Review Discussion

First, thank you for commenting on my review. Bad or good, I'd rather see a response from the company than not.

As to your comment, we have been measuring 0-to-60 times of cars for decades; that doesn't mean those measurements are obsolete. There is a reason noise and distortion are still the top metrics in audio gear: neither is wanted or desired in high-fidelity equipment.

You speak of brain and psychoacoustics. The latter is part of my professional career and in many reviews, I apply that to what is measured to determine the level of audibility. You can read more about me here: A bit about your host… | Audio Science Review (ASR) Forum

May I ask what your professional qualifications are in this regard? Reading your posts, you seem to be talking about folklore, like the idea that some distortion at these very low levels is desirable. There is not one piece of research or controlled testing that backs this. Sure, back in the tube days with very high distortion, some people opined that this might be the case, but even that was never verified in controlled testing.

Ultimately, the story you tell is the same story we hear from every expensive audio gear manufacturer whose products measure poorly. Why should anyone believe your story instead of theirs? How can you prove your claims? Are you asking that we just accept those claims because you said so?

You see, that is the beauty of measurements. You can touch and feel them. You can verify them. They are reliable facts. Not assumptions, “everybody knows,” self-serving opinions, etc.

But let’s say you are right. What possible value comes out of all the noise and interference that is clearly visible in your product, shown by tests that didn’t exist 70 years ago?

The test tone is the tall one in the middle. Your product is generating a ton of unwanted, non-correlated noise and jitter to its left and right. There is also a cluster of power-supply spikes at the far left, large enough to mask 24-bit-level signals. None of that is “musical,” and it immediately damns the engineering of the product. Look at the response from a competitor of yours at 1/6th the price:

The streamer runs Android and is gorgeous to look at as well.

And it is not just me that found this. Stereophile measurements demonstrated similar problems back in September of last year:

Look at all that garbage there. JA is not nearly as direct as I am but even he had to say that this performance was poor:

"The Mytek’s rejection of word-clock jitter with 16-bit J-Test data (fig.8) was puzzling. Although all the odd-order harmonics of the LSB-level, low-frequency squarewave are at the correct levels, indicated by the sloping green line in this graph, some power supply–related sidebands are present on either side of the high-level tone at one-quarter the same rate, as is a higher-level pair of sidebands at ±1380Hz. "

How is power supply noise desired in audio gear? Or those sidebands at 1380 Hz?
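For readers who have not seen a J-Test before: the conventional Dunn stimulus for 16-bit/44.1 kHz material is a high-level tone at fs/4 plus an LSB-level squarewave at fs/192, and that is what makes the quoted plot readable. Below is a minimal Python sketch of the idea, with my own variable names and a generic FFT; it is only an illustration, not the exact stimulus or analysis used by Stereophile or ASR.

```python
import numpy as np

fs = 44100                      # CD sample rate, Hz
n = 1 << 18                     # analysis length in samples
t = np.arange(n) / fs

# High-level tone at fs/4 (11.025 kHz), roughly -3 dBFS in 16-bit terms
tone = (2**15 - 1) * 10 ** (-3 / 20) * np.sin(2 * np.pi * (fs / 4) * t)

# LSB-level squarewave at fs/192 (about 229.7 Hz); its odd harmonics land
# at known, predictable levels (the sloping line JA mentions)
lsb = 0.5 * np.sign(np.sin(2 * np.pi * (fs / 192) * t))

stimulus = np.round(tone + lsb).astype(np.int16)

# Spectrum of whatever the device under test plays back: an ideal device
# shows only the tone plus the squarewave's odd harmonics; jitter or
# power-supply coupling shows up as extra sidebands around 11.025 kHz
spectrum_db = 20 * np.log10(np.abs(np.fft.rfft(stimulus * np.hanning(n))) + 1e-12)
freqs = np.fft.rfftfreq(n, d=1 / fs)
```

Anything in the plot that is not the tone or one of those squarewave harmonics, like the ±1380 Hz pair JA flagged, is generated by the device itself.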

I could go on, but you have not one but two sets of measurements agreeing with each other that less-than-good engineering has been applied to the design of the Bridge II. My hope is that you go back, look at the cause of these issues, and work on a new revision that deals with them. Your customers deserve that after paying some $5,000 for your box.

37 Likes

That would still make it without value. Your short-term “echoic” memory only lasts a few seconds, not hours. And as differences get smaller, even a fraction of a second is too long to remember minute differences. But even if you had used short-term A/B testing, unless you match levels and perform the test blind and repeat it, the results are without value.

I implore you to do the test again, but properly. You can buy DACs with much better performance than the Mytek for as low as $100. Be sure to match levels on both and do the test blind. Repeat 10 times and see if you can tell them apart at least 8 times. This is how to do the test properly and without bias.
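For what it’s worth, the 8-out-of-10 criterion is not arbitrary; a quick binomial calculation (my own back-of-the-envelope figures, not part of any formal standard) shows how unlikely it is to clear that bar by guessing:

```python
from math import comb

trials, needed = 10, 8

# Probability of getting at least 8 of 10 right by pure guessing (p = 0.5 per trial)
p_chance = sum(comb(trials, k) for k in range(needed, trials + 1)) / 2**trials
print(f"chance of scoring {needed}+/{trials} by guessing: {p_chance:.3f}")  # ~0.055
```

A guesser clears that bar only about 5.5% of the time, which is why a level-matched, blind, repeated trial carries weight that a single casual comparison does not.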

Without this, any of us can come up with your outcome, or the reverse. I know, because I have done it.

23 Likes

You didn’t read my responses in that thread, did you? Because if you had, you would have learned that I listen to a ton of gear in my reviews. Every speaker, headphone, and headphone amp gets a listening test. That is probably 50 to 70 products a year, if not more. The BB2 would have gotten that listening test as well had it not performed so poorly and had a filtering bug that they have known about since the Stereophile review last year. So I aborted the review at that point.

In the same Stereophile review, the subjective reviewer found noise issues with Wi-Fi: https://www.stereophile.com/content/mytek-digital-brooklyn-bridge-ii-roon-core-preamplifier

“Downstairs, with my desktop system, I noticed some low-level noise and hash, the kind that can sometimes leak through a computer soundcard, and also some hum. The hash was not audible from the balanced or headphone outputs—only the unbalanced. At normal listening levels, with no music playing, the hash was audible but low in level.”

That kind of interference is precisely what my measurements showed over and over again. The device simply has not had proper isolation of the sensitive audio section from the digital subsystem and the power supply. These things are easily measured, but seemingly that was not done, or if done, not fixed.

21 Likes

First, thank you for commenting on my review. Bad or good, I'd rather see a response from the company than not.

Sure. I’m talking to you while others chose to rightfully ignore you. This is not because you have done measurements (that’s great), but because of how you later interpret them (or actually DON’T interpret them) and then handle them on your bizarrely constructed forum. The whole thing of you being THE authority to “approve and disapprove” a given piece of equipment, then the whole voting system where you unleash the mob on a product: this is the real problem. Who in their right mind would want to take part in this?

As you have challenged very accomplished audio designers in this cavalier, judgmental way, it is no wonder you did not hear back from them.

If you measured something, reached out to the designer, showed them potential problems, and verified that these are indeed problems (like this 2nd harmonic you got wrong with the Mytek), this would be both professional and courteous, and it might create an opening for something valuable.

But the current style? No. It is too divisive; you’ll end up with a bunch of hardcore followers who cheer you on, and everybody else will just ignore ASR. Most audio companies own an Audio Precision analyzer; they can measure and don’t need your AP.

As to your comment, we have been measuring 0-to-60 times of cars for decades; that doesn't mean those measurements are obsolete.

People buy cars after a test drive. That’s usually the first thing.

Did you listen to the BB2 on good speakers? Did you A/B it with the Eversolo? How was the soundstage on either?

There is a reason noise and distortion are still the top metrics in audio gear: neither is wanted or desired in high-fidelity equipment.

This is a mid-20th-century approach for generic, mass-produced gear. We now know a lot more beyond this, and you are still stuck in the mid 20th century…

Noise, yes, I agree, is generally not desirable.

But the crux of the matter is that distortions are desirable if they are the right ones, and so is the time/phase alignment of those distortions. Why don’t you dig deeper into this subject?

Maybe a Stradivarius sounds so good and costs $1 million because it produces the right harmonics? Or would you rather have the “clean” sound of strings without the wooden box? Sure, that virtuoso violin player could have gotten a Chinese violin on Alibaba for $100.

You can hear this on the BB2 with the HAT™ function. When you engage HAT, which boosts the 2nd harmonic by 12 dB, you don’t hear it as unpleasant distortion but as a touch of pleasant groove. This is the point you don’t seem to understand, and in the eyes of many people it outright destroys your theory that the SINAD number directly correlates with “sound quality.”
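To put the numbers in this paragraph into amplitude terms, here is a tiny, purely illustrative calculation; the -95 dB baseline is simply the second-harmonic figure cited further down in this post, and HAT’s actual processing is not public:

```python
# Illustrative only: what a 2nd harmonic at -95 dB vs. -83 dB (the same
# component raised by 12 dB) means as an amplitude ratio to the fundamental.
for h2_db in (-95.0, -83.0):
    ratio = 10 ** (h2_db / 20)
    print(f"{h2_db:6.1f} dB -> amplitude ratio {ratio:.2e} ({100 * ratio:.4f}% 2nd harmonic)")
```

Whether a component of that size reads as “groove” or as nothing at all is precisely what this exchange is arguing about.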

Am I right that this is what you are saying: better SINAD = better sound? If you say YES, you should lose the right to judge equipment sound quality right here, because this is a CARDINAL ERROR. The whole logic of good vs. bad at ASR is built on this single premise.

So if it is a YES, then I have a second question, following on my studio tape example. Q: If SINAD is the metric, what sounds better: a 1/2" 30 IPS studio tape (SNR of 60 dB) or the $9 Apple dongle (SNR of 96 dB)? It would be the Apple dongle, right?

So, since the sound from these tapes is 90% of the sound on hi-res Qobuz (not the digital sine waves from your AP), how do you now account for those 60 dB, the tape distortion, and the playback of all of this? What matters and what doesn’t? Please elaborate…
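For context on the two SNR figures being compared, the standard ideal-quantizer relation SNR ≈ 6.02·N + 1.76 dB converts a noise floor into effective bits; it says nothing about distortion character or masking, only about noise:

```python
# Standard ideal-quantizer relation: SNR ≈ 6.02·N + 1.76 dB for a full-scale
# sine into an N-bit converter, solved here for N ("effective bits").
def effective_bits(snr_db: float) -> float:
    return (snr_db - 1.76) / 6.02

for label, snr_db in (('1/2" 30 IPS studio tape (figure quoted above)', 60.0),
                      ("$9 dongle DAC (figure quoted above)", 96.0)):
    print(f"{label}: {snr_db:.0f} dB SNR is about {effective_bits(snr_db):.1f} effective bits")
```

Roughly 10 bits versus roughly 16 bits of noise floor, which is exactly the gap the rest of this exchange argues about: whether that number is what decides how the result sounds.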

You speak of brain and psychoacoustics. The latter is part of my professional career and in many reviews, I apply that to what is measured to determine the level of audibility. You can read more about me here: A bit about your host… | Audio Science Review (ASR) Forum

The level of audibility is related to the character of the background noise; in other words, it’s the background sounds that determine how well you can hear the main sound. And the character of the sound itself matters too; makes sense, right? This is discussed somewhere on ASR, where some figures appeared to be arbitrarily posted numbers like -120 dB. That is not a scientific approach. The threshold of audibility is not constant.

May I ask what your professional qualifications are in this regard?

I thought I mentioned this in the first post: I studied electronics and acoustics, worked for many years in the largest New York City recording studios, and have run Mytek for 30 years, where we are known for some of the best professional converters and, later, excellent award-winning hi-fi. I designed my first 18-bit mastering ADC in NYC in 1991. My career follows the complete arc of digital audio history: from the early Sony 1630 recorders at the Hit Factory in 1989, through the first 20-bit and the 9624 converters of the 90s, DSD in the 2000s (Mytek made a master DSD recorder for Sony), then MQA in 2015, to current hi-res digital today (768 kHz/32-bit and DSD512).

There were very similar discussions about DSD in the early 2000s: how badly DSD measures, how it has all that noise, etc. The same claims of SINAD crap as here. Yet 20 years later, DSD is still considered the best-sounding digital format (by the way, because it does not use digital filters).

Reading your posts, you seem to be talking about folklore, like the idea that some distortion at these very low levels is desirable. There is not one piece of research or controlled testing that backs this.

? There are plenty of AES papers on the subject. And the “folklore” is called experience.
Me? A lot of records made at studios, a lot of music heard, 100+ pieces of equipment designed. Humans use ears to hear and perceive sound, not an Audio Precision. But I do follow the routine of design/listen/measure/correct/listen/measure. Most designers do this, but not all.

If you still have the BB2, plug it in and turn the HAT on and off while playing music. Tell me HOW you hear/perceive these extra 12 dB of 2nd harmonic.

Sure, back in the tube days with very high distortion, some people opined that this might be the case, but even that was never verified in controlled testing.

You see, Amir… You make absolute statements like this that just show there is a whole audio world you are not part of, or that you perhaps willfully ignore.

It’s not as if we were all lost in a desert and now you have shown up to enlighten us with this incredible SINAD. If you try to be “scientific” yet ignore and don’t investigate counterclaims, you’ll end up being ignored while the rest of us move on.

Ultimately, the story you tell is the same story we hear from every expensive audio gear manufacturer whose products measure poorly. Why should anyone believe your story instead of theirs? How can you prove your claims? Are you asking that we just accept those claims because you said so?

Yes, you hear the same thing from all of us because of your simplistic approach, followed by the real arrogance of your forum presentation system. Your forum architecture and policies are the real problem. News of my post here has already made it to the trolls on your forum. Who would want to be part of this?

You see, that is the beauty of measurements. You can touch and feel them. You can verify them. They are reliable facts. Not assumptions, “everybody knows,” self-serving opinions, etc.

There is a whole studio-engineering/audiophile vocabulary: soundstage, tight bass, smooth top, clinical sound, lifeless sound, groovy, musical, etc.

You may dismiss these as subjective, but unfortunately this is how records are made! Recording studio staff don’t talk SINAD. They talk music.

Now, the real challenge is to correlate the two: for example, does -95 dB of second harmonic make a DAC groovy? Yes! Is this good? 9 of 10 people would agree. Is the linear power supply responsible for tighter bass? Yes! Most audiophiles would agree.

This is not something mystical, “subjective,” or “unscientific,” but the clear, daily vocabulary of recording professionals and audiophiles, one that describes the main sound character of a given piece of equipment. We use it every day and understand its meaning perfectly and precisely.

We use one microphone for vocals and another for piano. Why? Because they sound different. Recording is art aided by science, not the other way around.

Your measurements do not. They are a tool that is useful in design, but it is just a simple tool. Using it exclusively to judge sound quality is like peeping into Carnegie Hall through a keyhole.

But for some people it may take years to understand the complexity of sound.

But let’s say you are right. What possible value comes out of all the noise and interference that is clearly visible in your product, shown by tests that didn’t exist 70 years ago?

It may be visible, but is it audible? And how?

If you now took the next step of correlating the measurements with what you heard and said, for example, “I believe that the presence of this noise here results in me hearing this, or hearing less of that,” I would have a whole new level of interest in your work.

But when you routinely continue to do what you do now, it’s simply boring, divisive, and, I’m afraid, harmful to newcomers in hi-fi.

I could now say that maybe it is ASR that misleads people into buying the wrong equipment, when they might otherwise have really enjoyed a nice tube amp or a turntable.

After all, we listen to music for feelings of pleasure, which are “subjective.”

It’s like food: some people like fat and some people like lean.

Sincerely, Michal at Mytek

13 Likes

What? 90% of music on Qobuz is from tapes? Where on earth did you get that? It is not remotely true. I would be shocked if 1% is from tape.

FYI I have a tape deck:

I very much enjoy old masters of rock/jazz compared to the horrid digital versions of the same. But the format itself will lose to that $9 dongle. Tape hiss is most definitely there, and so is distortion, although a lot of it gets masked. If the mastering were not better, there would be zero fidelity advantage to it. They are, however, wonderful to look at when playing music. :slight_smile:

8 Likes

I am sorry, but you are answering a question I did not ask. You claimed I don’t know about the “brain and psychoacoustics.” I asked you what you know about it. Nothing in the design of electronics or converters teaches you these topics. Nor acoustics (which is another of my areas of expertise).

I am an electrical engineer, and I know nothing in my degree program taught me that. And there are countless design engineers who don’t know anything about psychoacoustics, much less brain science, which is a medical field.

Despite the above, I routinely hear audio designers saying “the way the brain works…” You can’t make any such statements without deep study and knowledge of the field. It is a disservice to real experts in this field.

Why don’t you cite those papers then? And what experience are you speaking of? Have you performed double-blind controlled tests, injecting the amount of harmonic distortion your product produces, and demonstrated a preference? Or are we back to you having designed DACs and all of a sudden becoming an expert in how the brain works?

Where are those measurements? Why are they not on your website? Why have you not posted them as a response to my measurements if they are better? How come Stereophile’s measurements showed very similar issues? Did you not supply the product to them directly, and thus have involvement in their testing?

If you want to compare track records, I have tested nearly 500 DACs now. You can read my AES paper on the learnings from that: AES E-Library
" Comprehensive Objective Analysis of Digital to Analog Conversion in Consumer and Professional Applications

“A six-year project measuring performance of over 400 Digital to Analog Converters (DACs) brings insight into the range of performances and prices in this device category. Measurements show that full audible transparency has been achieved in many DACs at reasonable costs, based on off-the-shelf IC DACs. Consumer DACs lead in this regard over those intended for professional applications. Custom/discrete DACs have not been seen to bring an advantage in either performance or cost. Suggestions are made to improve functionality and performance of measurement equipment and standardization of output voltages in DACs.”

Please read the paper and see examples of both good and bad performance, and how there is zero correlation between price and performance. Designers seem to have forgotten to do what you say: measure and improve the design. Instead, they tell stories to customers. Just like you are doing here with stories of what you used to do. None of that matters. What matters is how the product works to achieve high fidelity. You know, faithfulness to the source.

If your goal is something different from the above, then you had better a) say it front and center and b) provide controlled testing that shows a reliable preference. Without this, you are making marketing statements, not a counter to my technical review.

18 Likes

Your tape machine is a 1/4" home deck.

What? 90% of music on Qobuz is from tapes? Where on earth did you get that? It is not remotely true. I would be shocked if 1% is from tape.

Up until 1995-1999, depending on the studio, 1/2" 30 IPS tape was the standard mixdown master. Multitracks were 24-track 1" at 30 IPS, sometimes two of them.

Pretty much all classic rock and jazz is from analog masters or analog multitracks (some were remixed to digital).

Dire Straits’ Brothers in Arms used a Sony 3324 for multitrack, then an analog SSL/Neve mixdown to 1/2" analog tape. In the years 1993-98 we usually mixed down analog to 1/2" tape and, in parallel, to DAT machines and later to higher-res Tascam. FWIW, these analog masters are more valuable today than the digital ones.

There were very few purely digital recordings until about the 2000s. That was after most of the catalog available on Qobuz was made.

M

8 Likes


Discussions like this are why the Roon forum is so valuable to some members like myself.

Amir has come from a forum owned and run by himself to a neutral ground where he’s not in control; Michal has come to the same neutral territory to defend against a review that has a negative impact on his business interests.

Michal could improve his case by posting clear references to back up his claims. One screenshot of a newspaper article from a couple of decades ago just isn’t sufficient. Some of his arguments simply contain a lot of words and are ethos arguments devoid of verifiable evidence.

Amir could stand to be more objective in his representation of the information in his reviews on ASR. I looked at several reviews on ASR, and they do contain lots of useful information and measurements, but each picture also has a description below it. Many of these descriptions contain the positive or negative feelings and opinions Amir has towards the equipment, which injects a bias in the reader that influences their perception of the measurements. It would be more in line with the scientific method if he didn’t bias the reader, and let them determine conclusions themselves, IMO.

Regardless, I appreciate Michal and Amir for having a discussion here!

12 Likes

I am puzzled by this comment. People look to me not only to make measurements but also to explain what they mean. Even in a published paper, such as the one I did for the AES, there is an explanation of the measurements, commenting on what is good or bad in each one. If the comments I make are in error, there will be a hundred members with pitchforks ready to go after me. :slight_smile: And indeed they do so. The level of scrutiny of my reviews is incredibly high, given the knowledge level of many members, including technical members from many companies. The format of the reviews in the context of a forum thread enables and encourages such commentary, something you rarely find in other review formats.

In addition, I have done extensive tutorials on the measurements so as to educate readers on what they mean. For DACs, here is the video on my channel:

To be fair, when I started I did not even write a conclusion about the measurements. I just showed the measurements with a few words and left it at that. But many people asked for my final judgment as someone who a) has performed so many measurements (2,000 audio products so far) and b) has engineering knowledge of their design, and is therefore able to give a final recommendation. I do that, but even there, I leave a poll so the membership can vote otherwise.

I don’t know of any other way to do what I am doing and satisfy the millions of people per month who come to read the reviews. The format is working quite well but, of course, like everything else, it is not perfect. There is one of me and a mountain of products to review. There is no editor. No helper. Nothing. I measure, highlight what is important, and sum things up at the end.

25 Likes

Here’s an example of one:

Just at the top of this post, over 400 people have voted on how good they think this device is as a streamer, but do these people own or have experience with the device? Likely only a few do; it does, however, influence a visitor to the site reading this review. Also, there are a few comments casting this device in a negative light in the first few sentences. Does the sharpness of a heat sink actually matter when I’ve come to look at measurements?

As a visitor to your site looking at this review, I’ve determined you have a negative opinion of the device and that I should as well, and I haven’t even seen a single measurement yet. I find this to be non-objective, as I have been biased against this device.

That would be the most unbiased representation of a piece of equipment: you haven’t told the reader what they’re supposed to think, just given them the facts.

It’s clear from the activity on your site that what you’re doing is working for you; I’m simply stating my opinion after reading the back-and-forth between you and Michal.

2 Likes

We can all have a personal opinion on a subject, an item, etc. without bringing science into it. Because I’m no scientist, I like reading reviews from many sources to get different angles.

Some reviews will lean towards the negative side of the scale. Others will lean towards the positive side.

A good reviewer, in my opinion, will always state that something is good or bad in their own opinion. This is to say that others may differ in their views.

Our individual views are valid, and that’s why we all have different hi-fi setups.

But this thread has seriously gone way off topic.

I think a cleanup on aisle 5 is needed. @moderators

:v::dove::white_flag:

1 Like

No, and that is the beauty of the type of reviews I do! Everyone reading the review has the same data as I do. Therefore they can evaluate it and then include their preferences for price, looks, functionality, etc., giving other readers another aspect of the review.

What? You are holding me accountable for someone just looking at a poll and not reading the review I wrote? If they have no use for the review then that is on them, not me.

As to this being “objective,” it is not meant to be. It is a subjective poll/evaluation just as my conclusions are. The data in the review is the objective part and that doesn’t change no matter what the poll says.

No, I am doing what works great for the audiophile community. They used to complain to get me to change my recommendation. I gave them a tool to express that en masse. They get a voice and readers get other data points. Nothing is taken away from the review.

Sure, someone just looking at a poll that is dominated by “great” or “poor” votes may be inclined to go with the crowd. I see little wrong with that, as the community consensus tends to track the objective facts in the review. But again, it is on them to read as little or as much of the review as they like.

Interpreting measurements and applying psychoacoustics to them is my job as a reviewer. You are asking me to keep readers in the dark and not explain things as I see them. That runs completely counter to how you educate people and expand the knowledge base of the audiophile community.

If my statements are improperly biased, then, as I explained, I will immediately get called on them. So you had better believe that what I state reflects what the measurements mean. And in that regard, they are a great help to many audiophiles who are not familiar with, or do not fully understand, the scope of all the tests.

Really, this is a remarkable comment you are making: that explaining things is a bad thing and does the thinking for the reader.

11 Likes

Seriously, I don’t get all this whining and bickering about ASR.

ASR provides measurements of audio devices, explains them, and puts them into context with other, comparable devices. It lets its users vote and comment on tests and reviews.

To me, this makes it pretty unique and also highly valuable in an industry in which reviews have been based on “hearsay” for ages… I highly appreciate that.

If people can’t stand an objective approach to audio gear, they can simply not visit ASR. I, for one, don’t visit certain audio sites either, because they would make me cringe…

If you’re a producer of audio gear, you must be able to deal with negative reviews of your product anyway.

Nothing wrong with that; move on…

23 Likes

The thread had already been placed in slow mode, and that appears to have had the desired effect for now.
Whilst the discussion remains civil and informative, the thread will remain open. Yes, some portions are drifting off topic, and that will be dealt with forthwith.
Thank you for your cooperation.

2 Likes

If someone expresses their views in a forum, I can’t see how that makes them any kind of authority. If people take such views to be the ultimate truth, why would that reflect poorly on the author? And finally, I find it disingenuous to compare an invitation to comment to unleashing a mob.

6 Likes

My thread can be shut down as far as I’m concerned. It’s run its course, as the BBII is now shipping.

1 Like

My vote: dragging the love/hate ASR thing that infects so many audio forums into this one is not a net positive for the Roon community. If you want to fight with the guy who runs ASR, there are plenty of other places to do that.

I think the 0-60 time for cars is a good way to think about this, actually. Test data published in car magazines tells you very little about how a car feels when you drive it. And different people have different opinions about what they value in the car they drive. IMHO, the trouble with ASR is that it gives the strong impression that the tests it conducts give a complete picture of what a piece of audio equipment will be like to live with and play music through. I myself much prefer reviewers who live with the equipment for some time, listen to a variety of pieces of music that they explicitly name, and compare it directly with other equipment they have on hand at the same time.

10 Likes

I think analogies between audio and video/cars/food, etc. are inherently flawed. If anything, comparing, say, THD to acceleration is comparing apples to eggs.

3 Likes

I’m pretty conflicted on ASR. There definitely is some fanboy groupthink there, however unintended. My main issue is the lack of perspective. Many of the “problems” uncovered in Amir’s testing are actually inaudible. I understand that from an engineering perspective we would all like every circuit and component to measure as close to theoretical perfection as possible. Given that this isn’t possible, and that there are always cost/benefit analyses happening at the design level, it becomes critical to point out what actually matters vs. what is merely not ideal from a theoretical standpoint.

One example I’m familiar with is ASR’s review of the Onkyo TX-RZ50 AVR, which I owned and enjoyed for 2 years. In his review Amir tests DAC performance on the HDMI input, saying, “General performance is the same but now we have symmetrical spikes around our main tone which indicates low frequency jitter component. This is unfortunate.” Then about 2 pages later he says, “Oddly when I ran my specific jitter test, it was S/PDIF which fell behind HDMI. Objectively neither performance is proper but from audibility point of view, spikes are low enough as to not matter.” So inaudible all along, but still merits an “unfortunate” comment? Then he tosses in the line, “Linearity was ‘good enough’ for the class.” It’s the dismissive comments that imply the product is somehow substandard.

But the real fun comes later in the review when he lab-tests the amplifier section by driving a pure 4-ohm load and manages to trip the amplifier protection circuitry. He then says, “This is a showstopper bug as far as I am concerned.” I actively participated in the TX-RZ50 AVS forum for two years during this period, and I can’t even count how many times people posted saying they were considering the Onkyo but had heard about this terrible bug. Every single time, we all had to reassure them it hadn’t ever been reported IRL, not even once in more than 400 pages of posts in the thread.

So did Amir have something wrong with his review sample? Since he doesn’t contact the manufacturer for comment, there’s no way of knowing. Was his testing representative of real-world use? Did people with difficult-to-drive speakers not rely on the internal amp section of a $1,300 mass-market AVR?

So it’s the perspective that I think is often missing at ASR, and blanket pronouncements on suitability don’t help. IMO, of course.

13 Likes