Quoted from [Sav95]

A higher-dimensional waveguide mesh is a regular array of 1-D waveguides arranged along each perpendicular dimension, interconnected at their crossings. Two conditions must be satisfied at a lossless junction connecting lines of equal impedance: (1) the sum of inputs equals the sum of outputs (flows add to zero) and (2) the signals in each crossing waveguide are equal at the junction (continuity of pressure). Based on these conditions, a difference equation can be derived for the nodes of an N-dimensional rectangular mesh:

\[
p_k(n) = \frac{1}{N} \sum_{l=1}^{2N} p_l(n-1) - p_k(n-2)
\]
where \(p_k(n)\) represents the signal pressure at junction \(k\) at time step \(n\), and \(l\) runs over the \(2N\) axial neighbors of \(k\). This waveguide equation is equivalent to a difference equation derived from the Helmholtz equation by discretizing time and space. Boundary conditions can be modeled by adding special termination nodes to the ends of each waveguide. These nodes have only one neighbor and thus behave as in 1-D waveguides. An open end with zero impedance corresponds to binding a node to zero:

\[
p_k(n) = 0
\]
This binding produces a phase-reversing reflection. Walls that make phase-preserving reflections have infinite impedance; the corresponding equation mirrors the single neighbor \(l\) in the 1-D update:

\[
p_k(n) = 2\,p_l(n-1) - p_k(n-2)
\]
Anechoic walls would have boundaries of matched impedance, corresponding to a mesh that continues to infinity. We approximate this situation with the matched termination node of a one-dimensional waveguide, which simply passes the neighbor's previous value through:

\[
p_k(n) = p_l(n-1)
\]
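The three termination equations can be checked numerically in one dimension, where the difference scheme propagates a pulse exactly one node per time step. The sketch below is not from [Sav95]; the function names and the small test setup are invented for illustration. A unit pulse is sent toward the left boundary and the sign of the reflected pulse is inspected.

```python
def step_1d(prev, prev2, boundary):
    """One step of the 1-D scheme p_j(n) = p_{j-1}(n-1) + p_{j+1}(n-1) - p_j(n-2)."""
    m = len(prev)
    nxt = [0.0] * m
    for j in range(1, m - 1):
        nxt[j] = prev[j - 1] + prev[j + 1] - prev2[j]
    # Left-end termination node (single neighbor at index 1):
    if boundary == "rigid":        # p_k(n) = 2 p_l(n-1) - p_k(n-2), infinite impedance
        nxt[0] = 2 * prev[1] - prev2[0]
    elif boundary == "anechoic":   # p_k(n) = p_l(n-1), matched 1-D termination
        nxt[0] = prev[1]
    else:                          # "open": p_k(n) = 0, zero impedance
        nxt[0] = 0.0
    return nxt

def reflect(boundary, steps=4, m=8):
    """Launch a leftward unit pulse from node 2 and return the state after `steps`."""
    p1 = [0.0] * m   # p(n-1): pulse currently at node 2
    p2 = [0.0] * m   # p(n-2): pulse was at node 3 one step earlier
    p1[2], p2[3] = 1.0, 1.0
    for _ in range(steps):
        p1, p2 = step_1d(p1, p2, boundary), p1
    return p1
```

Four steps after launch the pulse has hit the boundary and returned to node 2: the rigid wall reflects it with value +1, the open end with value -1, and the anechoic termination absorbs it completely.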
Theoretically, waves in a 3-D mesh propagate through one diagonal unit (of length \(\sqrt{3}\,d\)) in three time steps. Thus the simulation time step must be:

\[
\Delta t = \frac{d}{\sqrt{3}\,c}
\]
where d is the distance between two nodes in the mesh and c is the speed of sound in the medium. For example, if d equals 10 cm, the update frequency of the mesh is \(\sqrt{3}\,c/d \approx 1.73 \cdot 343\,\mathrm{m/s} / 0.1\,\mathrm{m} \approx 6\,\mathrm{kHz}\). An inherent problem with finite difference methods is the dispersion of wavefronts: high-frequency signals traveling along the coordinate axes are delayed, whereas diagonally the waves propagate undistorted. For this reason the model is valid only at frequencies well below the update frequency of the mesh. This effect can be reduced by using a denser mesh or higher-order difference equations, both of which increase computation time.
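To make the node update concrete, here is a minimal sketch, not taken from [Sav95], of a 2-D instance of the mesh equation (N = 2, four axial neighbors). The grid size, the impulse excitation, and the choice of zero-pressure (phase-reversing) edges are illustrative assumptions.

```python
def mesh_step(prev, prev2):
    """One time step of a 2-D waveguide mesh:
    p_k(n) = (1/N) * sum of the 2N neighbors at n-1  -  p_k(n-2),  with N = 2.
    Edge nodes stay clamped to zero (phase-reversing boundary)."""
    rows, cols = len(prev), len(prev[0])
    nxt = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            neighbors = (prev[i - 1][j] + prev[i + 1][j] +
                         prev[i][j - 1] + prev[i][j + 1])
            nxt[i][j] = neighbors / 2.0 - prev2[i][j]
    return nxt

def simulate(size=9, steps=12):
    """Excite the center node with a unit impulse and run the mesh."""
    p1 = [[0.0] * size for _ in range(size)]   # p(n-1)
    p2 = [[0.0] * size for _ in range(size)]   # p(n-2)
    p1[size // 2][size // 2] = 1.0             # impulse at the center
    for _ in range(steps):
        p1, p2 = mesh_step(p1, p2), p1
    return p1
```

After one step the impulse has spread equally to its four neighbors (each receiving 1/N = 1/2 of it), which is the expected behavior of the lossless scattering junction.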

An acoustic signal arriving at a listener from a given direction interacts with parts of the listener's body before it reaches the eardrums. As a result of this interaction, the sound reaching each eardrum is modified by echoes from the listener's shoulders, by diffraction around the head, by the pinna response, and by the resonance of the auditory canal. In effect, the body acts as a filter on the incoming sound. Because of the finite speed of sound in air, a noticeable interaural time difference arises that depends on the position of the sound source (at frequencies below 200 Hz it is perceived as a phase shift of the sound). The spectral distortion likewise depends on the position of the source relative to the head. The HRTF is a function of the azimuth and elevation of the sound source that describes the filtering effect of the ``virtual user's'' body on sound coming from the given direction. For a given azimuth and elevation, the HRTF yields two sets of parameters for numeric filters, one for each ear. Sound processed with these filters appears to the listener to come from outside his or her head and from a particular direction.

Values for the numeric filters (finite impulse response filters, for example) can be measured with a miniature probe microphone placed near the eardrum. The impulse responses measured for different source positions can then be applied directly as FIR filter coefficients. There have also been successful attempts to simulate the HRTF.
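The measure-then-filter idea can be sketched as follows. This is an illustrative toy, not a real HRTF implementation: the two head-related impulse responses (HRIRs) below are invented values standing in for measured data, chosen so that a source on the listener's left reaches the left ear earlier and louder.

```python
def fir_filter(signal, coeffs):
    """Direct-form FIR filter: y[n] = sum_k coeffs[k] * signal[n - k]."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * signal[n - k]
        out.append(acc)
    return out

def spatialize(mono, hrir_left, hrir_right):
    """Apply one measured impulse response per ear; returns (left, right)."""
    return fir_filter(mono, hrir_left), fir_filter(mono, hrir_right)

# Toy HRIR pair for a source to the listener's left (invented, not measured):
# the right-ear response is delayed by two samples and attenuated.
HRIR_L = [1.0, 0.5]
HRIR_R = [0.0, 0.0, 0.4]
```

Feeding a unit impulse through `spatialize` simply reproduces the two impulse responses, which is how one would sanity-check filters built from real measurements.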

HRTFs vary from person to person, but an HRTF averaged over a few subjects performs well for most users. The best results, though, are achieved with a ``personal'' HRTF measured on the user. A sign that an HRTF is not suited to a particular user is the loss of externalization, the sensation of the sound being located outside the head.