Wave field synthesis
Wave field synthesis (WFS) is a spatial audio rendering technique characterized by the creation of virtual acoustic environments. It produces "artificial" wave fronts synthesized by a large number of individually driven loudspeakers. Such wave fronts seem to originate from a virtual starting point, the virtual or notional source. Contrary to traditional spatialization techniques such as stereo, the localization of virtual sources in WFS does not depend on or change with the listener's position.

Physical fundamentals

WFS is based on Huygens' Principle, which states that any wave front can be regarded as a superposition of elementary spherical waves. Therefore, any wave front can be synthesized from such elementary waves. In practice, a computer controls a large array of individual loudspeakers and actuates each one at exactly the time when the desired virtual wave front would pass through it.
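The driving rule described above can be sketched as a simple delay-and-sum computation. The following is a minimal illustration only, not a production WFS driving function: the array geometry, the 1/r amplitude law, and all names are assumptions made for the example.

```python
import math

C = 343.0  # speed of sound in air, m/s (approximate)

def driving_delays(speaker_xs, src_x, src_y):
    """For a virtual point source behind a linear array along y = 0,
    return per-speaker (delay_seconds, gain) pairs.  Each speaker is
    actuated when the desired spherical wave front would pass through
    it: delay = distance(source, speaker) / c, gain ~ 1 / distance."""
    out = []
    for x in speaker_xs:
        d = math.hypot(x - src_x, src_y)  # source-to-speaker distance
        out.append((d / C, 1.0 / max(d, 1e-6)))
    return out

# 16 speakers spaced 10 cm apart; virtual source 2 m behind the array
speakers = [i * 0.10 for i in range(16)]
delays = driving_delays(speakers, src_x=0.75, src_y=-2.0)
```

Speakers nearest the virtual source fire first and loudest, so the superposed elementary waves approximate the spherical wave front the source would have produced.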

The basic procedure was developed in 1988 by Professor Berkhout at the Delft University of Technology.[1] Its mathematical basis is the Kirchhoff-Helmholtz integral, which states that the sound pressure is completely determined within a volume free of sources if sound pressure and velocity are determined at all points on its surface.
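In one common frequency-domain form (sign conventions and the orientation of the surface normal vary across the literature), with the free-field Green's function G, the integral reads:

```latex
P(\mathbf{x},\omega) = \oint_{S}\left[
  G(\mathbf{x}|\mathbf{x}_0,\omega)\,\frac{\partial P(\mathbf{x}_0,\omega)}{\partial n}
  - P(\mathbf{x}_0,\omega)\,\frac{\partial G(\mathbf{x}|\mathbf{x}_0,\omega)}{\partial n}
\right]\mathrm{d}S_0,
\qquad
G(\mathbf{x}|\mathbf{x}_0,\omega) = \frac{e^{-j\frac{\omega}{c}\,|\mathbf{x}-\mathbf{x}_0|}}{4\pi\,|\mathbf{x}-\mathbf{x}_0|}
```

The term involving the normal derivative of P corresponds to a layer of monopole sources (driven by the surface velocity), and the term involving the normal derivative of G to a layer of dipole sources (driven by the surface pressure), which is why an exact reproduction would require both loudspeaker types on the surface.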


Therefore, any sound field can be reconstructed if sound pressure and acoustic particle velocity are restored at all points on the surface of its volume. This approach is the underlying principle of holophony.

For reproduction, the entire surface of the volume would have to be covered with closely spaced monopole and dipole loudspeakers, each individually driven with its own signal. Moreover, the listening area would have to be anechoic, in order to comply with the source-free volume assumption. In practice, this is hardly feasible.

According to the Rayleigh II integral, the sound pressure is determined at every point of a half-space if the sound pressure at every point of its dividing plane is known. Because our acoustic perception is most exact in the horizontal plane, practical approaches generally reduce the problem to a horizontal loudspeaker line, circle or rectangle around the listener.

The origin of the synthesized wave front can be at any point on the horizontal plane of the loudspeakers. It represents the virtual acoustic source, which hardly differs from a material acoustic source at the same position. Unlike conventional (stereo) reproduction, the virtual sources do not move along when the listener moves about the room.
For sources behind the loudspeakers, the array produces convex wave fronts. Sources in front of the speakers can be rendered by concave wave fronts that focus at the virtual source and diverge again. Hence the reproduction inside the volume is incomplete: it breaks down if the listener sits between the speakers and an inner source.
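A focused (in-front) source can be sketched as a time-reversed version of the ordinary driving rule: speakers farthest from the focus fire first, so the elementary waves converge at the focus point and diverge again beyond it. This is a minimal illustration under assumed geometry; the function name and array layout are invented for the example.

```python
import math

C = 343.0  # speed of sound in air, m/s (approximate)

def focused_source_delays(speaker_xs, focus_x, focus_y):
    """Delays for a focused virtual source at (focus_x, focus_y), with
    focus_y > 0 on the listener side of a linear array along y = 0.
    Speakers FARTHEST from the focus fire first, so the elementary
    waves converge at the focus (a concave wave front) and then
    diverge as if radiated from that point."""
    dists = [math.hypot(x - focus_x, focus_y) for x in speaker_xs]
    dmax = max(dists)
    return [(dmax - d) / C for d in dists]  # nearest speaker fires last

speakers = [i * 0.10 for i in range(16)]
delays = focused_source_delays(speakers, focus_x=0.75, focus_y=1.0)
```

The breakdown mentioned above is visible here: a listener sitting between the array and the focus meets a converging wave front, which no real source at the focus position would produce.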

Procedural advantages

By means of level and time information stored in the impulse response of the recording room, or derived from a model-based mirror-source approach, wave field synthesis can establish a sound field with a very stable position of the acoustic sources. In principle, it would be possible to establish a virtual copy of a genuine sound field that is indistinguishable from the real one. Changes of the listener's position in the rendition area would then produce the same impression as a corresponding change of location in the recording room. Listeners are no longer confined to a "sweet spot" area within the room.

The Moving Picture Experts Group standardized the object-oriented transmission standard MPEG-4, which allows a separate transmission of content (the dry recorded audio signal) and form (the impulse response or the acoustic model).
Each virtual acoustic source needs its own (mono) audio channel. The spatial sound field in the recording room consists of the direct wave of the acoustic source and a spatially distributed pattern of mirror acoustic sources caused by reflections from the recording-room surfaces. Reducing that spatial mirror-source distribution to a few transmission channels causes a significant loss of spatial information. This spatial distribution can be synthesized much more accurately on the rendition side.
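The mirror-source idea can be sketched with a first-order image-source model: each reflection off a wall of a rectangular room is modelled as an extra source mirrored across that wall, and in WFS every such image can be rendered as its own virtual source. This is a minimal sketch with an assumed 2-D rectangular room; real models include higher orders and wall absorption.

```python
def first_order_image_sources(src, room):
    """First-order mirror (image) sources of a point source `src`
    = (x, y) in a rectangular room [0, Lx] x [0, Ly].  Each wall
    reflection is replaced by an image source mirrored across that
    wall; higher-order reflections would mirror these images again."""
    x, y = src
    lx, ly = room
    return [
        (-x, y),           # reflection off the wall at x = 0
        (2 * lx - x, y),   # reflection off the wall at x = Lx
        (x, -y),           # reflection off the wall at y = 0
        (x, 2 * ly - y),   # reflection off the wall at y = Ly
    ]

# source at (1, 2) in a 4 m x 5 m room
images = first_order_image_sources((1.0, 2.0), (4.0, 5.0))
```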

Compared with conventional channel-oriented rendition procedures, WFS offers a clear advantage: virtual acoustic sources ("virtual panning spots") guided by the signal content of the associated channels can be positioned far beyond the physical rendition area. This reduces the influence of the listener position, because the relative changes in angles and levels are much smaller than with closely placed physical loudspeaker boxes. The sweet spot is thereby extended considerably; it can now cover nearly the entire rendition area. Wave field synthesis is thus not only compatible with conventional transmission methods, it clearly improves their reproduction.

Remaining problems

The most perceptible difference from the original sound field is the reduction of the synthesized field to two dimensions along the horizontal plane of the loudspeaker lines. This is particularly noticeable when reproducing ambience, because accurate synthesis requires acoustic damping in the rendition area, and that damping does not complement natural acoustic sources.

Sensitivity to room acoustics

Since WFS attempts to simulate the acoustic characteristics of the recording space, the acoustics of the rendition area must be suppressed. One possible solution is to make the walls absorbing and non-reflective. The second possibility is playback within the near field; for this to work effectively, the loudspeakers must couple very closely to the hearing zone, or the diaphragm surface must be very large.

High cost

A further problem is high cost. A large number of individual transducers must be placed very close together; otherwise, spatial aliasing effects become audible. This is a result of having a finite number of transducers (and hence elementary waves).

Aliasing

There are undesirable spatial distortions caused by position-dependent narrow-band drop-outs in the frequency response within the rendition range – in a word, aliasing. Their frequency depends on the angle of the virtual acoustic source and on the angle of the listener relative to the loudspeaker arrangement.


For aliasing-free rendition across the entire audio range, an emitter spacing of less than about 2 cm would therefore be necessary. Fortunately, our hearing is not particularly sensitive to spatial aliasing, and an emitter spacing of 10-15 cm is generally sufficient. On the other hand, the size of the emitter array limits the representation range: outside its borders, no virtual acoustic sources can be produced.
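The trade-off between emitter spacing and aliasing frequency can be sketched numerically. Published criteria differ in the maximum incidence angle they assume; the formula below, f_al = c / (2 Δx sin θ_max), is one commonly cited worst-case form and is used here purely as an illustration, so the exact figures should not be read as definitive.

```python
import math

C = 343.0  # speed of sound in air, m/s (approximate)

def aliasing_frequency(dx, max_angle_deg=90.0):
    """Worst-case spatial-aliasing frequency for a linear array with
    emitter spacing dx (metres), under the commonly cited criterion
    f_al = c / (2 * dx * sin(theta_max)), where theta_max is the
    largest incidence angle of any synthesized wave front on the
    array.  Smaller spacing or a restricted angle range raises f_al."""
    s = math.sin(math.radians(max_angle_deg))
    return C / (2.0 * dx * s)

f_10cm = aliasing_frequency(0.10)  # 10 cm spacing: roughly 1.7 kHz
f_2cm = aliasing_frequency(0.02)   # 2 cm spacing: roughly 8.6 kHz
```

Restricting the synthesized wave fronts to shallower incidence angles (smaller theta_max) raises the aliasing-free limit for the same spacing, which is one reason practical systems tolerate 10-15 cm spacing.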

Truncation effect

Another cause of disturbance of the spherical wave front is the "truncation effect". Because the resulting wave front is a composite of elementary waves, a sudden change of pressure can occur where the speaker row ends and no further speakers deliver elementary waves. This causes a "shadow-wave" effect. For virtual acoustic sources placed in front of the loudspeaker arrangement, this pressure change precedes the actual wave front, whereby it becomes clearly audible.

In signal processing terms, this is spectral leakage in the spatial domain, caused by the application of a rectangular function as a window function on what would otherwise be an infinite array of speakers.
The shadow wave can be reduced if the volume of the outermost loudspeakers is reduced; this corresponds to using a window function that tapers off instead of being truncated. See the spectral leakage and window function articles for how the choice of window function affects the signal response.
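The tapering described above can be sketched as a per-speaker gain profile. The raised-cosine (Tukey-style) ramp and the taper fraction below are assumptions for illustration; any smoothly decaying window serves the same purpose.

```python
import math

def tapered_gains(n, taper_fraction=0.25):
    """Per-speaker amplitude weights for an n-element array.  The
    outermost taper_fraction of speakers on each end is faded in with
    a raised-cosine ramp instead of cutting off abruptly, which
    softens the truncation ('shadow wave') artifact at the array's
    ends, at the cost of slightly shrinking the effective array."""
    ramp = max(1, int(n * taper_fraction))
    gains = [1.0] * n
    for i in range(ramp):
        w = 0.5 * (1.0 - math.cos(math.pi * (i + 0.5) / ramp))  # 0 -> 1
        gains[i] *= w          # fade in at the left end
        gains[n - 1 - i] *= w  # fade out at the right end
    return gains

gains = tapered_gains(16)  # 4 tapered speakers on each end, 8 at full level
```

Multiplying each speaker's driving signal by its weight replaces the rectangular window with a tapered one, trading a little array aperture for a much weaker shadow wave.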

Research and market maturity

Early development of WFS started in 1988 at the Delft University of Technology. Further work was carried out in the context of the CARROUSO project, funded by the European Union (January 2001 to June 2003), in which ten European institutes participated. The WFS sound system IOSONO was developed by the Fraunhofer Institute for Digital Media Technology (IDMT) at the Technical University of Ilmenau.

Loudspeaker arrays implementing WFS have been installed in some cinemas, theatres and public venues with good success. The first live WFS transmission took place in July 2008, relaying a performance from Cologne Cathedral to lecture hall 104 of the Technical University of Berlin. The hall contains the world's largest speaker system, with 2,700 loudspeakers on 832 independent channels.

Development of home-audio applications of WFS has only recently begun. In spite of these efforts, large acceptance problems remain.
