Originally published in Oregon Film & Video Magazine, December 2006
One of the many aspects of film production is visual/sonic continuity. The eye easily sees the environment surrounding a voice, and the ear-brain sonically recognizes that environment, so the audience expects vocal tracks to sound as though they were made in the scene of the movie. When that doesn’t happen, the magic spell of film is broken.
ADR, dubs, and VOs are usually done in the acoustically dry, reflection-free environment of the typical recording studio. Those tracks aren’t usable as they stand: they don’t sound like the on-screen environment where the action is taking place, and they require post-processing to dial in any sense of believability.
To bring a dry voice to life, the effects unit is set up as follows: the direct signal is blended with a lower-level, diffuse reverb with no pre-delay, crossed over at about 500 Hz so that only the upper treble is processed. Nothing is added in the lower treble or bass range. The problem with this synthesized ambience, however, is that it is made entirely of cloned sound, and the ear-brain doesn’t like it.
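For readers who want to see that signal flow spelled out, the following is a minimal sketch in Python, assuming a 16-bit mono WAV file and the numpy/scipy libraries. The filename, the reverb tail built from decaying noise, its 0.4-second length, and the 15 percent mix level are illustrative assumptions, not settings taken from the article.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, fftconvolve, sosfilt

fs, dry = wavfile.read("adr_take.wav")            # hypothetical 16-bit mono dry ADR take
dry = dry.astype(np.float64) / np.iinfo(np.int16).max

# Crossover at about 500 Hz: only the band above the crossover gets ambience.
sos_hp = butter(4, 500.0, btype="highpass", fs=fs, output="sos")
upper = sosfilt(sos_hp, dry)

# Diffuse reverb tail with no pre-delay, modeled here as exponentially decaying noise.
tail_len = int(0.4 * fs)                          # ~0.4 s tail, an assumed length
rng = np.random.default_rng(0)
tail = rng.standard_normal(tail_len) * np.exp(-6.0 * np.arange(tail_len) / tail_len)
ambience = fftconvolve(upper, tail)[: len(dry)]

# Blend the ambience back under the direct signal at a lower level.
ambience /= np.max(np.abs(ambience)) + 1e-12
wet = dry + 0.15 * ambience
wet /= np.max(np.abs(wet))

wavfile.write("adr_with_ambience.wav", fs, (wet * 32767).astype(np.int16))
```

However the tail is shaped, everything the processor adds is derived from the dry take itself, which is exactly the “cloned sound” objection raised above.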
Natural, acoustic sound always sounds better than synthesized sound. Instead of working in a dry, lifeless, reflection-free studio environment, you can set up a lifelike-sounding, reflection-rich acoustic space and get tracks that sound real in the first place: tracks that need no further processing.
This lifelike-sounding acoustic setup is as follows: In an otherwise dead room, set up eight to ten specular (smooth and shiny) diffusers in a horseshoe pattern on a radius of about three to four feet. Set the mic, bidirectional or omni, in the center of the reflecting array. The talent stands in the opening. Upper treble is instantly diffused and adds to the direct signal. Lower treble and bass leak out through the spaces between the specular diffusers and do not return. The listening ear likes this acoustic version of lifelike sound.
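To see why these diffuse reflections blend with the direct sound instead of reading as echo, here is a rough geometry sketch in Python. The 270-degree arc, the 3.5-foot radius, the nine diffusers, and a speed of sound of roughly 1130 feet per second are assumptions for illustration; the article itself specifies only a three-to-four-foot radius and eight to ten diffusers.

```python
import math

SPEED_OF_SOUND_FT_S = 1130.0    # approximate speed of sound at room temperature
radius_ft = 3.5                 # within the article's three-to-four-foot range
num_diffusers = 9               # within the article's eight-to-ten range

# Spread the diffusers over a horseshoe arc, leaving the opening (where the
# talent stands) centered at 0 degrees. The mic sits at the center of the circle.
arc_deg = 270.0
start_deg = (360.0 - arc_deg) / 2.0

for i in range(num_diffusers):
    angle_deg = start_deg + i * arc_deg / (num_diffusers - 1)
    # Direct path: talent (on the circle at 0 degrees) to mic (center) = one radius.
    # Reflected path: talent -> diffuser (a chord of the circle) -> mic (one radius).
    # The radius legs cancel, so the extra path is just the chord length.
    chord_ft = 2.0 * radius_ft * math.sin(math.radians(angle_deg) / 2.0)
    delay_ms = 1000.0 * chord_ft / SPEED_OF_SOUND_FT_S
    print(f"diffuser at {angle_deg:5.1f} deg: reflection lags the direct sound by {delay_ms:4.1f} ms")
```

Under these assumptions, every reflection arrives within roughly 2 to 6 milliseconds of the direct sound, well inside the window where the ear fuses early reflections with the voice, which is why the array enriches the track rather than adding audible echo.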
In ADR work, the producer wants more vocal character and the engineer wants less movement. The engineer sends a somewhat post-processed version of the talent’s own voice back into one earphone to help the talent get and stay on track. Working with two voices, one in each ear, adds stress.
In an acoustic, reflection-rich environment, however, the talent is free to move around, using body language to help their character emote. The sound to tape remains constant in level and coloration, and the talent’s body or head movement is inaudible. Better still, the talent does not need a headphone send: what they hear is exactly what the mic hears and exactly what goes to tape. Drop-outs and punch-ins are just as easy.
Reflection-rich, acoustically conditioned space around the mic is the only way to get lifelike vocals to tape.
Art Noxon is a fully accredited Professional Acoustical Engineer with Master’s degrees in both Mechanical Engineering (Acoustics) and Physics. He invented the TubeTrap in 1983 and created Acoustic Sciences Corp in 1984 to manufacture and distribute it. A prolific inventor, he holds 12 TubeTrap-related patents and has developed over 150 other acoustic devices and counting. A scientist, lecturer, writer, and teacher of acoustics, Art Noxon has presented numerous AES papers, magazine articles, white papers, lectures, and classes in the field of applied acoustics.