Modulation in reverbs: reality and unreality

The use of modulation in digital reverbs dates back to the first commercial digital reverberators. The EMT250 used an enormous amount of modulation, to the point where it sounded like a chorus unit. Lexicon’s 224 reverberator incorporated what they called “chorus” into its algorithms, working along principles not dissimilar to the string ensembles in use at the time. The Ursa Major Space Station was based around an unstable feedback arrangement that relied upon randomization to achieve longer decay times without self-oscillating.

Recently, Barry Blesser has written about randomization in his book, “Spaces Speak: Are You Listening?” Blesser argues that thermal variations in most real-world acoustic spaces result in small variations in the speed of sound within those spaces. Multiply this by several orders of reflections, and the result is an acoustic space that is naturally time varying. Blesser goes on to argue that random time variation in algorithmic reverbs emulates the realities of an acoustic space more accurately than time-invariant convolution reverbs do.
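To get a feel for the magnitude of the effect Blesser describes, here is a quick sketch using the standard linear approximation for the speed of sound in air, c(T) ≈ 331.3 + 0.606·T m/s. The path length and temperatures are illustrative, not taken from Blesser’s sources:

```python
# Sketch: how a small temperature change shifts the arrival time of a
# single reflection path. Path length and temperatures are illustrative.

def speed_of_sound(temp_c):
    """Approximate speed of sound in air (m/s) at temp_c degrees Celsius."""
    return 331.3 + 0.606 * temp_c

def arrival_time(path_m, temp_c):
    """Travel time (s) over path_m meters at the given temperature."""
    return path_m / speed_of_sound(temp_c)

path = 30.0  # one wall-to-wall reflection path, in meters
t_cool = arrival_time(path, 20.0)
t_warm = arrival_time(path, 20.5)
delta_us = (t_cool - t_warm) * 1e6
print(f"delay shift over one {path:.0f} m path: {delta_us:.1f} microseconds")
```

A half-degree gradient shifts a single 30 m reflection by only tens of microseconds, which is why the audible result in a real hall is a subtle spectral spread rather than obvious chorusing.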

Blesser makes a convincing argument, but I am not convinced that the heavy amounts of delay modulation used in the older reverbs make for a more “realistic” space. The randomization in the older algorithms does a nice job of masking the periodic artifacts that can be found when using a small amount of delay memory. However, the depth of modulation used in the old units goes far beyond what can be heard in any “real world” acoustic space. The thermal currents in a symphony hall will result in a slight spread of frequencies as the sound decays, but will not create the extreme chorusing and detuning found in the EMT250, or in the Lexicon algorithms with high levels of Chorus.

Having said that, I would argue that the strength of algorithmic reverbs lies not in emulating “real” acoustic spaces, but in creating new acoustic spaces that never existed before. Blesser recently said that the marketing angle of the EMT250 was to reproduce the sound of a concert hall, but he later describes the EMT250 in terms of a “pure effect world.” The early digital reverbs, in the hands of sonic innovators such as Brian Eno and Daniel Lanois, were quickly put towards the goal of generating an unreal ambience, where sounds hang in space, slowly evolving and modulating. Listen to Brian Eno’s work with Harold Budd, on “The Plateaux of Mirror,” to hear the long ambiences and heavy chorusing of the EMT250 in action. A later generation of ambient artists made heavy use of the modulated reverb algorithms in such boxes as the Alesis Quadraverb to create sheets of sound that bear little resemblance to any acoustic space found on earth.

Creating these washy, chorused, “spacey” reverbs has been a pursuit of mine since 1999. My early Csound work explored relatively simple feedback delay networks, with randomly modulated delay lengths, in order to achieve huge reverb decays that turn any input signal into “spectral plasma” (a term lifted from Christopher Moore, the Ursa Major reverb designer). With my more recent work, I have tried to strike a balance between realistic reverberation and the unrealistic sounds of the early digital units. The plate algorithms in Eos are an attempt to emulate the natural exponential decay of a metal plate, but were also inspired by my understanding of the EMT250. The Superhall algorithm in Eos was not attempting to emulate any “natural” space, but rather the classic early digital hall algorithms, with heavy randomization, nonlinear buildup of the initial reverberation decay, and the possibility of obtaining near infinite decays. The “real” world continues to be a source of inspiration for my algorithms, but I find myself more attracted to the unreal side.
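For readers curious what “a feedback delay network with randomly modulated delay lengths” looks like in practice, here is a minimal sketch of the general idea. All sizes, gains, and modulation depths are illustrative; this is not the Eos or Superhall code, just the textbook structure with slow random drift added to the delay lengths:

```python
# Minimal 4-line feedback delay network with randomly drifting delay
# lengths. Delay lengths, feedback gain, and modulation depth are
# illustrative, not taken from any shipping algorithm.
import numpy as np

rng = np.random.default_rng(0)

SR = 48000
LINES = 4
base_delays = np.array([1931.0, 2213.0, 2647.0, 3089.0])  # samples
feedback = 0.85          # overall decay control
mod_depth = 8.0          # samples of random delay drift
buf_len = 4096
bufs = np.zeros((LINES, buf_len))
write_pos = 0

# Orthogonal feedback matrix (normalized Hadamard) keeps energy stable.
H = 0.5 * np.array([[1, 1, 1, 1],
                    [1, -1, 1, -1],
                    [1, 1, -1, -1],
                    [1, -1, -1, 1]])

# Each delay eases toward a new random target a few times per second.
mod = np.zeros(LINES)
targets = rng.uniform(-1, 1, LINES)

def read_frac(buf, pos):
    """Linear-interpolated read of a circular buffer at fractional pos."""
    i = int(np.floor(pos)) % buf_len
    frac = pos - np.floor(pos)
    return (1 - frac) * buf[i] + frac * buf[(i + 1) % buf_len]

def process(x):
    """Run mono input x through the modulated FDN; return the wet signal."""
    global write_pos, targets
    out = np.zeros(len(x))
    for n in range(len(x)):
        if n % 4800 == 0:                    # fresh random targets ~10x/sec
            targets = rng.uniform(-1, 1, LINES)
        mod[:] += 0.0005 * (targets - mod)   # slow, smooth drift
        delays = base_delays + mod_depth * mod
        taps = np.array([read_frac(bufs[k], write_pos - delays[k])
                         for k in range(LINES)])
        fb = feedback * (H @ taps)           # mix lines through the matrix
        for k in range(LINES):
            bufs[k, write_pos % buf_len] = x[n] + fb[k]
        out[n] = taps.sum() / LINES
        write_pos += 1
    return out

impulse = np.zeros(SR // 2); impulse[0] = 1.0
wet = process(impulse)
print("peak wet level:", float(np.abs(wet).max()))
```

Because the modulation constantly moves the delay lengths a few samples, the periodic ringing that a small, static delay network would produce is smeared into the washy, detuned decay described above; push the feedback toward 1.0 and the random drift keeps the near-infinite tail from settling into a stable pitch.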

Comments (2)

  • Gregg Orenstein

    Blesser’s argument seems to be a ray-tracing problem (with reflective surfaces). If he has taken measurements that show a demonstrable variation in arrival times from multiple sources, due to air density/directional differences resulting from thermal causes, then he’s on to something (so long as they manifest within the range of human perception). But until such tests are made, it’s hard to know what proportion those effects have compared to the chaotic but usually bounded location and orientation of sound sources themselves (no musician is perfectly motionless). The distinct disadvantage of IRs is that they hold both room state and source location and direction static. The location and ever-changing positioning of a musician’s body also have to be factored in as acoustical elements, further complicating matters.

    Blesser’s idea definitely has the flash of brilliance; but until his accounting for random variation in room response is taken as a ratio against other dynamic causes, it remains only that.

    • Just saw your in-depth reply. Anyway, Blesser himself does not seem to have conducted his own studies on temperature-related air-density changes in the speed of sound, but he cites several in his 2001 AES paper and his recent book. Some of the studies were conducted outdoors, and would take wind speed into account, while others were done in large temperature-controlled rooms, where a temperature differential was introduced with a heated wire. The resulting differences in the speed of sound were very pronounced.

      I haven’t seen any studies about how musicians moving in a room would affect the sound. My guess is that the movement of musicians will cause changes in the reflection pattern, but that the speed of motion will not be pronounced enough to cause a large Doppler shift. The Doppler shift from the speed of sound changes is pretty small, but it grows exponentially as the reflections increase in order, due to the speed changes being in the medium, rather than from the sound source.
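      A quick back-of-the-envelope sketch of that compounding: if a time-varying medium stretches each reflection leg by the same relative factor (1 + eps), the cumulative stretch after k reflections is (1 + eps)**k. The eps value below is illustrative (roughly a 0.1% speed-of-sound wobble), not a measured figure:

```python
# Illustrative only: cumulative time stretch and pitch deviation (in
# cents) as reflection order increases, assuming each leg is stretched
# by the same relative factor.
import math

eps = 0.001  # relative per-leg time stretch, illustrative

for order in (1, 5, 10, 20):
    stretch = (1 + eps) ** order
    cents = 1200 * math.log2(stretch)
    print(f"order {order:2d}: cumulative stretch {stretch:.4f} "
          f"({cents:.2f} cents)")
```

      A shift that is inaudible at first order becomes a clearly perceptible detuning by the twentieth reflection, which is consistent with the idea that medium variation matters far more for the dense late tail than for the direct sound.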
