Measuring digital audio quality has always been a challenge, and most of the time audiophiles don’t know any measurement tools beyond RMAA’s analog metrics, ending up with failed evaluations like the one below:
This test was done in a purely software environment using VB-Audio Virtual Cable, to make sure no hardware error was involved. After lengthy research in pro audio communities, I found DiffMaker being used in the thread below.
DiffMaker was used to test for audible effects of
Changing interconnect cables (compensation for cable capacitance may be required)
Different types of basic components (resistors, capacitors, inductors)
Special power cords
Changing loudspeaker cables (cable inductance may need to be matched or compensated)
Treatments to audio CDs (pens, demagnetizers, lathes, dampers, coatings…)
Vibration control devices
EMI control devices
Paints and lacquers used on cables, etc.
Premium audio connectors
Devices said to modify electrons or their travel, such as certain treated “clocks”
Different kinds of operational amplifiers, transistors, or vacuum tubes
Different kinds of CD players
Changing between power amplifiers
General audio “tweaks” said to affect audio signals (rather than to affect the listener directly)
Anything else where the ability to change an audio signal is questioned
There’s an interesting metric called ‘Correlated Null Depth’ that can capture even the most subtle changes as measurable data. Archimago describes this metric as follows, if you’ve been following his measurement tests.
The higher this value, the more correlated the 2 samples are (ie. the “closer” they sound).
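To give a rough idea of what this metric measures, here is a minimal Python sketch, not DiffMaker’s actual algorithm, that nulls one capture against the other and expresses the residual in dB. It assumes both signals are already time- and level-aligned, which DiffMaker handles automatically:

```python
import numpy as np

def correlation_depth(reference, test):
    """Approximate a DiffMaker-style correlation (null) depth in dB.

    Subtracts the test capture from the reference and compares the
    residual energy against the reference energy. This is a toy
    illustration, assuming pre-aligned signals.
    """
    residual = reference - test
    ref_rms = np.sqrt(np.mean(reference ** 2))
    res_rms = np.sqrt(np.mean(residual ** 2))
    if res_rms == 0:
        return float("inf")  # identical samples: a perfect null
    return 20 * np.log10(ref_rms / res_rms)

# Identical signals null completely; a tiny perturbation leaves a residual.
t = np.linspace(0, 1, 96000)
sig = np.sin(2 * np.pi * 1000 * t)
print(correlation_depth(sig, sig))           # inf (perfect null)
print(correlation_depth(sig, sig * 1.0001))  # ~80 dB
```

The intuition matches the quote above: the smaller the residual after subtraction, the higher the depth, and the “closer” the two captures sound.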
Now that you have a better understanding of DiffMaker and correlation depth, let’s proceed to the methodology. After a few weeks of running DiffMaker tests, this is the method I settled on for the final version:
- Set up the master file and audio playback/recording entirely in the digital domain. In this case, I’ll use VB-Audio Virtual Cable, foobar2000, and Audacity on Windows 10.
- Prepare aligned master files with silence added. As a basic demonstration, I’ll make 5 samples of aligned/before/after WAV files with Audacity in 24/96 format (10 ms latency).
- Route a bit-perfect recording from Virtual Cable’s master audio stream, using foobar2000’s WASAPI output into Audacity’s WASAPI input, and export the audio as before.wav.
- Run the free version of Fidelizer at the Purist user level with the updated foobar2000 configuration from Fidelizer’s User Guide, record again, and export the audio as after.wav.
- Compare the results using Audio DiffMaker with the master file as reference.
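DiffMaker handles the time alignment between master and recording automatically. As a hedged illustration of how such alignment can work, here is a small NumPy sketch (my own toy code, not DiffMaker’s method) that estimates recording latency by cross-correlation; the 8 kHz sample rate and 10 ms latency are made-up demo values:

```python
import numpy as np

def align_by_cross_correlation(master, capture):
    """Estimate the capture's offset against the master and trim it.

    A rough stand-in for DiffMaker's automatic alignment step: the lag
    that maximizes the cross-correlation is taken as the recording
    latency. Assumes the capture starts later than the master and that
    both share one sample rate.
    """
    corr = np.correlate(capture, master, mode="full")
    lag = int(np.argmax(corr)) - (len(master) - 1)
    lag = max(lag, 0)
    trimmed = capture[lag:lag + len(master)]
    return lag, trimmed

rate = 8000
t = np.arange(rate) / rate
master = np.sin(2 * np.pi * 440 * t)
capture = np.concatenate([np.zeros(80), master])  # 10 ms of latency
lag, aligned = align_by_cross_correlation(master, capture)
print(lag / rate)  # 0.01, i.e. the 10 ms latency injected above
```

In practice DiffMaker also compensates for level and sample-rate drift, which this sketch deliberately leaves out.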
The test machine ran an AMD FX-8350 with 8 cores at 4.2 GHz and 8 MB of L2/L3 cache, on a high-quality motherboard with 16 GB of RAM and a Platinum-grade PSU. Here are the results from my experiment.
parameters: 0sec, 0.000dB (L), 0.000dB (R)…Corr Depth: 300.0 dB (L), 300.0 dB (R)
This is the ideal result: an exact comparison yields 300.0 dB of correlation depth.
parameters: -3.5sec, 0.000dB (L), 0.000dB (R)…Corr Depth: 175.6 dB (L), 174.0 dB (R)
parameters: -4.5sec, 0.000dB (L), 0.000dB (R)…Corr Depth: 168.5 dB (L), 168.6 dB (R)
parameters: -5.5sec, 0.000dB (L), 0.000dB (R)…Corr Depth: 167.4 dB (L), 167.5 dB (R)
parameters: -6.5sec, 0.000dB (L), 0.000dB (R)…Corr Depth: 166.3 dB (L), 167.0 dB (R)
parameters: -7.5sec, 0.000dB (L), 0.000dB (R)…Corr Depth: 172.5 dB (L), 176.1 dB (R)
Average: 0.000dB (0.000-0.000)…Corr Depth: 170.35 dB (166.3-176.1)
Median: 0.000dB…Corr Depth: 168.55 dB
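For transparency, the average, median, and swing quoted for this run can be reproduced from the ten per-channel depths with a few lines of Python:

```python
import statistics

# Correlation depths (dB) from the five offset runs above, L then R channels.
depths = [175.6, 174.0, 168.5, 168.6, 167.4, 167.5,
          166.3, 167.0, 172.5, 176.1]

print(round(statistics.mean(depths), 2))    # 170.35 dB average
print(round(statistics.median(depths), 2))  # 168.55 dB median
print(round(max(depths) - min(depths), 1))  # 9.8 dB swing
```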
Correlation depth dropped to nearly 50% of the perfect result but stayed above 150 dB. With a 9.8 dB swing between minimum and maximum, it’s reasonable to assume roughly a 5% threshold for evaluation.
parameters: -1.581sec, 0.001dB (L), 0.001dB (R)…Corr Depth: 90.6 dB (L), 91.5 dB (R)
parameters: -1.184sec, 0.001dB (L), 0.001dB (R)…Corr Depth: 87.2 dB (L), 87.3 dB (R)
parameters: -1.018sec, 0.001dB (L), 0.001dB (R)…Corr Depth: 88.1 dB (L), 88.1 dB (R)
parameters: -946.4msec, 0.001dB (L), 0.001dB (R)…Corr Depth: 88.3 dB (L), 86.3 dB (R)
parameters: -686.3msec, 0.001dB (L), 0.001dB (R)…Corr Depth: 90.2 dB (L), 87.6 dB (R)
Average: 0.001dB (0.001-0.001)…Corr Depth: 88.52 dB (86.3-91.5)
Median: 0.001dB…Corr Depth: 88.1 dB
The real-world results came in with a much narrower range: only 5.2 dB between the minimum and maximum correlation depth. At least this is more consistent than the artificially aligned result.
parameters: -563.4msec, 0.001dB (L), 0.001dB (R)…Corr Depth: 104.0 dB (L), 95.9 dB (R)
parameters: -1.025sec, 0.001dB (L), 0.001dB (R)…Corr Depth: 93.5 dB (L), 94.0 dB (R)
parameters: -1.286sec, 0.001dB (L), 0.001dB (R)…Corr Depth: 87.2 dB (L), 87.3 dB (R)
parameters: -1.025sec, 0.001dB (L), 0.001dB (R)…Corr Depth: 88.1 dB (L), 88.2 dB (R)
parameters: -856.4msec, 0.001dB (L), 0.001dB (R)…Corr Depth: 90.4 dB (L), 87.6 dB (R)
Average: 0.001dB (0.001-0.001)…Corr Depth: 91.62 dB (87.2-104.0)
Median: 0.001dB…Corr Depth: 89.3 dB
It started great with over 100 dB, but the remaining runs wore down a bit over time because I also opened Chrome to chat on Facebook during the experiment, as a daily-usage test. I kept the conditions realistic on purpose: overly strict testing for high-quality results can invite data faking from people who can’t do the job properly.
With Fidelizer’s optimizations, we measured a 3.1 dB increase in average correlation depth and a 12.5 dB increase in maximum correlation depth, with general improvements in the other metrics too. I conclude that there is a measurable improvement even with bit-perfect playback in digital audio.
You can also try running this test on your own and adjust DiffMaker’s configuration to show different kinds of data, without rounding errors or with other standards. Have fun measuring audio software optimizations with DiffMaker!
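The increments quoted here are simply the differences between the two runs’ summary numbers:

```python
# Summary figures from the before/after runs above (dB).
baseline_avg, optimized_avg = 88.52, 91.62
baseline_max, optimized_max = 91.5, 104.0

print(round(optimized_avg - baseline_avg, 1))  # 3.1 dB average gain
print(round(optimized_max - baseline_max, 1))  # 12.5 dB maximum gain
```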