The TubeTrap’s Role in HiFi pt. 2
This week we continue reading what may be the most definitive story of the evolution of small-room acoustic conditioning ever written. The full paper traces the history of the TubeTrap, from its origins and refinement through its sudden emergence and domination of the HiFi acoustics market. You then learn exactly how a TubeTrap imparts such a profound improvement on your listening experience, along with the most appropriate metrics for quantifying that improvement. The paper goes on to explain the parameters that make an audio room excellent, concluding with advanced room construction techniques.
If an audiophile were stranded on a desert island with a 16′x26′x10′ shack and a pair of loudspeakers, this is the one paper they should bring. This issue we’ll jump right into the heart of the paper, exploring the benefits of TubeTraps and starting to explain how they happen.
Read the entire paper
Room Mode Control
The next idea I had was that they trapped room modes. It was well known in room-acoustics circles that all room modes have pressure zones in the corners of the room. Adding bass traps to the corners of the room certainly should add damping to all the room modes, which should reduce the sharpness of the modes, reduce their phase add-and-cancel effects, and produce a smoother frequency response curve.
Acoustic testing of the modes did show these changes: the Q, or sharpness, of the room mode resonances was reduced from 30–40 down to about 10, which sounded good enough. And yes, the frequency response curves showed changes, some smoothing in the bass range, but only to a very small degree, on the order of ½ to 1 dB, certainly no more. What a disappointment. This small amount of improvement is at or below the threshold of perception for sound level differences in a test lab setting, 1 dB. Certainly no one could hear the tiny acoustic EQ adjustment that came from adding corner bass traps. I published these results in another AES paper.
This couldn’t be the reason for the glowing reviews, customer acceptance and appreciation. The search for what TubeTraps were doing right when placed in the front corners of the listening room continued. Such small changes in the frequency response of the room simply did not match the thrill and wonderment audiophiles regularly reported after installing TubeTraps in their rooms, and the reports were always the same: tighter, punchier bass, losing the one-note bass, deeper extension into the low bass, and achieving something called musical bass. This is what one might expect from a good bass trap. However, the observations went on, and were not limited to improvements in bass response. Imaging, musicality and stage detailing were all identified, over and over, as unexpected but very significant improvements in audio performance due to adding TubeTraps to the room. How could adding bass traps to a room make significant improvements in the treble range performance of the room? Until I could sensibly answer that question, I couldn’t know what TubeTraps were doing right in listening rooms.
Testing, Testing and More Testing
To make tests in the lab and in customer rooms that tracked the room before and after “treatment,” we used the Crown Techron, a portable FFT analyzer that produced wonderful ETC (energy-time curve) waterfall displays, reverb decay curves and room frequency response curves, to name just a few. The problem we kept having was that we lost resolution when we worked down in the bass range, which was exactly where we wanted to work.
We could focus in on how the sound level changed over time but couldn’t pinpoint what frequency was involved. Alternatively, we could pinpoint the frequency we wanted to look at but we lost all observable detail in tracking how the sound level varied over time.
It turns out I got caught by the classic “uncertainty principle”: dT x dF = 1. For example, if we wanted to look at 30 Hz, we can’t just look at a single frequency; we have to box it in, say, within a 2 Hz bandwidth, so we actually look at data between 29 and 31 Hz in order to see what 30 Hz was doing in the room. Here’s the problem: dF = 2 Hz. This means dT x 2 Hz = 1, or dT = ½ second, so the fluctuations in time were being averaged over one half second. That was far too time-averaged to do us any good. We needed to see changes on the order of 1/10 second or faster, but then our dF was 10 Hz, and we were looking at a 10 Hz band, not a single frequency.
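The trade-off can be put into a few lines of Python (a minimal sketch, using only the dT x dF = 1 relation from the text):

```python
# A minimal sketch of the dT x dF = 1 trade-off described above:
# picking a frequency resolution dF forces the time resolution to dT = 1/dF,
# and picking a time resolution dT forces the bandwidth to dF = 1/dT.

def time_resolution(df_hz: float) -> float:
    """Averaging time in seconds forced by an analysis bandwidth of df_hz."""
    return 1.0 / df_hz

def bandwidth(dt_s: float) -> float:
    """Analysis bandwidth in Hz forced by a time resolution of dt_s."""
    return 1.0 / dt_s

# The two cases from the text:
print(time_resolution(2.0))   # 0.5  -> a 2 Hz window means 1/2-second averaging
print(bandwidth(0.1))         # 10.0 -> 1/10-second detail means a 10 Hz band
```

Either way you choose, one of the two resolutions is sacrificed; there is no setting that delivers both fine frequency and fine time detail at once.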

So I decided to go to a Syn-Aud-Con meeting. Everyone who attended this high-end sound system group had a Techron and used it regularly. I went to find out what the heck I was doing wrong and how to get good data in the low frequency range. Ultimately, the uncertainty principle won, and I gave up using the FFT analyzer to figure out what was going on at low frequencies.
Let’s back up to the beginning again. When the TubeTrap was invented I began to test it. We got the local university to loan us their concert hall reverb chamber. It wasn’t being used because they had changed from acoustic to electronic reverb reinforcement. We lived in that concrete room for 5 years; I had one tech there almost constantly. We were testing the sound absorption of every model of TubeTrap at every frequency from 25 Hz through about 700 Hz. That’s about 700 data points per test run. In a 10-second reverb chamber it takes about 10 seconds to charge the room with sound and 10 seconds for it to discharge, which ended up being about 30 seconds to get one test point. That amounts to 350 minutes, or nearly 6 hours, per test run. At first it was thrilling, but after a solid year it was getting tedious.
So we experimented with speeding the test up. We took known traps and ran the test faster, comparing the results against the known results for the product. We managed to speed the test up to a 1/8-second tone burst test cycle: that’s 8 tone bursts per second, each at a different frequency. This wasn’t FFT testing; it was just a tone generator, a sound meter and a strip chart recorder. This direct testing system did not have any dT x dF = 1 problems that we knew of, and we always got great pure-tone absorption data. If we ran the test any faster than that, things got blurry and we lost our ability to resolve details. By then our testing had become automated, and each 700-point data run took only about 1 ½ minutes. We had nailed high-speed bass trap testing.
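The run-time arithmetic in the two paragraphs above checks out in a few lines (a sketch; the 700-point count, the 30-second figure and the 8-burst rate are the numbers given in the text):

```python
# Sanity-check of the reverb chamber test-run timing described above.

points = 700             # one data point per frequency, 25 Hz through ~700 Hz
slow_s_per_point = 30    # charge + discharge in the 10-second reverb chamber

slow_total_min = points * slow_s_per_point / 60
print(slow_total_min)             # 350.0 minutes, i.e. nearly 6 hours per run

# The sped-up tone burst test: 8 bursts per second, one frequency per burst.
fast_total_min = points / 8 / 60
print(round(fast_total_min, 2))   # 1.46 minutes, the "about 1 1/2 minutes" per run
```

The speed-up factor is 240x, which is why a year of tedium turned into routine automated runs.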
So, I’m heading to Syn-Aud-Con with my dT x dF = 1 problem in mind, and I finally am sitting in the class with quite a few people who would go on to become audio industry leaders. Instead of the answer I came for, a European scientist, Victor Peutz of the Netherlands, walks into the room. He has come over to the US to introduce us to “intelligibility” measurements and what they mean.
Sound contractors have long been required to meet sound level standards, where every seat in the house had to have roughly the same sound level. Then a new spec showed up for sound contractors to meet: every seat in the house had to have the same spectral balance. After that they were saddled with yet another specification, the house curve. This is an EQ’d sound spectrum which also had to be delivered, within a couple dB, to every seat in the house. Everybody in the audience was guaranteed to be exposed to the same sound level and the same EQ. The next house spec, thanks to our European counterparts, was going to be the STI, the Speech Transmission Index, which measures speech intelligibility. This was the class where we were going to be trained to understand this test and how to perform the measurements.
The test they used was not initially an FFT test run on the Techron; a protocol for that was developed later. For now this was a very different test, something called an MTF, a Modulation Transfer Function, and B&K made the RASTI (RApid Speech Transmission Index) testing system. MTF is sort of like Morse code: da da daaa da da… The question is how fast you can send the code before it loses clarity and becomes a garbled blur of sound instead of a set of discrete tone bursts. And guess what the RASTI tone burst rate was? Right: 8 bursts per second.
By the way, photography engineers learn a lot about MTF. In photography it is about silver crystal size. If we have lots of light, we can use slow film, with lots of small silver crystals, which produces sharp differences in brightness and the ability to see very fine lines. If we don’t have much light, we use fast film, whose large crystals produce gradual differences in brightness. MTF measures how strongly the light changes from bright to dark as the line spacing gets closer. This is similar to MTF in sound, where we measure the difference in loudness of a gated sound: the sound level change between when the sound is on, off, and on again.
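The gated-sound idea can be illustrated with a toy modulation-depth calculation (a sketch only; this is the generic (max − min)/(max + min) contrast measure used in MTF discussions, not the actual B&K RASTI algorithm):

```python
def modulation_depth(intensity_envelope):
    """Contrast of an on/off intensity envelope: (max - min) / (max + min).
    1.0 means fully distinct bursts; values near 0 mean a garbled blur."""
    hi, lo = max(intensity_envelope), min(intensity_envelope)
    return (hi - lo) / (hi + lo)

# Crisp tone bursts: intensity swings fully between on (1.0) and off (0.0).
crisp = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]

# The same bursts after room decay has partly filled in the silent gaps.
smeared = [1.0, 0.4, 1.0, 0.4, 1.0, 0.4]

print(modulation_depth(crisp))              # 1.0
print(round(modulation_depth(smeared), 3))  # 0.429
```

Reverberant energy that lingers into the gaps raises the "off" level, shrinks the modulation depth, and is exactly what makes fast tone bursts (or fast speech) blur together.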
Notice: 2 eyes, and 2 ears. Same kind of system: stereoscopic is 3D visual imaging and stereophonic is 3D auditory imaging. Notice too that the eye’s event reaction time is 1/30 second, which lets fast slide shows become moving pictures, and that the ear’s event reaction time is the same, only here all the reflections arriving inside that window merge into one sound, while those arriving outside it become separate echoes.

Even more intriguing to me is that photography is a recording of optical spectrums that vary depending on where in space you happen to be looking, while audio is a recording of sonic spectrums that vary depending on where in time you happen to be listening. Space and time interchange roles. And finally, notice that photography engineers and their gear manufacturers are all about controlling the level of visible detail, the photographic MTF, for their customers, while audio engineers and their gear manufacturers see nothing, know nothing and say nothing about audio MTF for their customers. In fact, they have never even heard of it.


