Sound localization
Sound localization refers to a listener's ability to identify the location or origin of a detected sound in direction and distance. It may also refer to the methods in acoustical engineering to simulate the placement of an auditory cue in a virtual 3D space (see binaural recording, wave field synthesis).

The sound localization mechanisms of the human auditory system have been extensively studied.
The human auditory system uses several cues for sound source localization, including time- and level-differences between both ears, spectral information, timing analysis, correlation analysis, and pattern matching.

These cues are also used by animals, but there may be differences in usage, and there are also localization cues which are absent in the human auditory system, such as the effects of ear movements.

Sound localization by the human auditory system

Sound localization is the process of determining the location of a sound source. The brain utilizes subtle differences in intensity, spectral, and timing cues to localize sound sources. Localization can be described in terms of three-dimensional position: the azimuth or horizontal angle, the zenith or vertical angle, and the distance (for static sounds) or velocity (for moving sounds). The azimuth of a sound is signalled by the difference in arrival times between the ears, by the relative amplitude of high-frequency sounds (the shadow effect), and by the asymmetrical spectral reflections from various parts of our bodies, including torso, shoulders, and pinnae. The distance cues are the loss of amplitude, the loss of high frequencies, and the ratio of the direct signal to the reverberated signal. Depending on where the source is located, the head acts as a barrier that changes the timbre, intensity, and spectral qualities of the sound, helping the brain orient where the sound emanated from. These minute differences between the two ears are known as interaural cues. Lower frequencies, with longer wavelengths, diffract around the head, forcing the brain to rely on phase cues from the source. Helmut Haas discovered that we can discern the sound source from the earliest arriving wave front even when later reflections are up to 10 decibels louder than it. This principle is known as the Haas effect, a specific version of the precedence effect. Haas found that even a 1 millisecond difference in timing between the original sound and the reflected sound increased the perceived spaciousness, allowing the brain to discern the true location of the original sound. The nervous system combines all early reflections into a single perceptual whole, allowing the brain to process multiple different sounds at once. The nervous system will combine reflections that are within about 35 milliseconds of each other and that have a similar intensity.
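
The numbers in this paragraph are easy to turn into a concrete stimulus. The sketch below is a minimal illustration: the 44.1 kHz sample rate, 15 ms delay, and 6 dB attenuation are arbitrary choices for demonstration, not values from Haas's experiments.

    import numpy as np

    fs = 44100                              # sample rate, Hz (assumed)
    t = np.arange(int(0.1 * fs)) / fs

    # Direct sound: a short 1 kHz tone burst (10 ms)
    direct = np.sin(2 * np.pi * 1000 * t) * (t < 0.010)

    # One reflection: 15 ms later (inside the ~35 ms fusion window),
    # attenuated by 6 dB relative to the direct wave front
    delay = int(0.015 * fs)
    reflection = np.zeros_like(direct)
    reflection[delay:] = direct[:-delay] * 10 ** (-6 / 20)

    # Listeners fuse this mixture into a single event whose perceived
    # direction follows the earlier (direct) wave front
    mixture = direct + reflection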

Lateral information (left, ahead, right)

For determining the lateral input direction (left, front, right) the auditory system analyzes the following ear signal information:
  • Interaural time differences
    Sound from the right side reaches the right ear earlier than the left ear. The auditory system evaluates interaural time differences from
    • phase delays at low frequencies
    • group delays at high frequencies
  • Interaural level differences
    Sound from the right side has a higher level at the right ear than at the left ear, because the head shadows the left ear. These level differences are highly frequency dependent and they increase with increasing frequency.

For frequencies below 800 Hz, mainly interaural time differences (phase delays) are evaluated; for frequencies above 1600 Hz, mainly interaural level differences are evaluated. Between 800 Hz and 1600 Hz there is a transition zone, where both mechanisms play a role.

Localization accuracy is 1 degree for sources in front of the listener and 15 degrees for sources to the sides. Humans can discern interaural time differences of 10 microseconds or less.
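
A classic way to quantify how ITD varies with azimuth is Woodworth's spherical-head approximation, which the article does not state explicitly; the sketch below assumes it, along with a nominal head radius and speed of sound.

    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s at roughly room temperature
    HEAD_RADIUS = 0.0875     # m, about half the 21.5 cm ear distance cited below

    def woodworth_itd(azimuth_deg: float) -> float:
        """Interaural time difference (s) for a distant source at the
        given azimuth, under the spherical-head approximation."""
        theta = np.radians(azimuth_deg)
        return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + np.sin(theta))

    print(woodworth_itd(90) * 1e6)  # ~655 microseconds for a source at the side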

Evaluation for low frequencies

For frequencies below 800 Hz, the dimensions of the head (ear distance 21.5 cm, corresponding to an interaural time delay of 625 µs) are smaller than half the wavelength of the sound waves, so the auditory system can determine phase delays between both ears without confusion. Interaural level differences are very low in this frequency range, especially below about 200 Hz, so a precise evaluation of the input direction is nearly impossible on the basis of level differences alone. As the frequency drops below 80 Hz it becomes difficult or impossible to use either time difference or level difference to determine a sound's lateral source, because the phase difference between the ears becomes too small for a directional evaluation.
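
A quick numerical check of these figures (a minimal sketch, assuming a speed of sound of 343 m/s):

    SPEED_OF_SOUND = 343.0  # m/s
    EAR_DISTANCE = 0.215    # m, as quoted above

    for freq_hz in (80, 200, 800, 1600):
        half_wavelength_m = SPEED_OF_SOUND / freq_hz / 2
        unambiguous = half_wavelength_m > EAR_DISTANCE
        print(f"{freq_hz:>5} Hz: half wavelength {half_wavelength_m * 100:5.1f} cm,"
              f" phase cue unambiguous: {unambiguous}")

    # At 800 Hz the half wavelength (~21.4 cm) just matches the 21.5 cm
    # ear distance; above that, phase evaluation becomes ambiguous.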

Evaluation for high frequencies

For frequencies above 1600 Hz the dimensions of the head are greater than the length of the sound waves. An unambiguous determination of the input direction based on interaural phase alone is not possible at these frequencies. However, the interaural level differences become larger, and these level differences are evaluated by the auditory system. Also, group delays between the ears can be evaluated; this cue is more pronounced at higher frequencies. That is, if there is a sound onset, the delay of this onset between the ears can be used to determine the input direction of the corresponding sound source. This mechanism becomes especially important in reverberant environments. After a sound onset there is a short time frame where the direct sound reaches the ears, but not yet the reflected sound. The auditory system uses this short time frame for evaluating the sound source direction, and keeps this detected direction as long as reflections and reverberation prevent an unambiguous direction estimation.
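
The onset cue can be pictured as comparing when the signal envelope at each ear first crosses a threshold. The sketch below is a hypothetical illustration of that idea, not a model of the actual auditory mechanism.

    import numpy as np

    def onset_delay_s(left: np.ndarray, right: np.ndarray, fs: float,
                      threshold: float = 0.5) -> float:
        """Interaural onset delay in seconds, estimated from the first
        threshold crossing of each ear signal's envelope."""
        def onset_index(x: np.ndarray) -> int:
            envelope = np.abs(x)
            return int(np.argmax(envelope >= threshold * envelope.max()))
        return (onset_index(right) - onset_index(left)) / fs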

The mechanisms described above cannot be used to differentiate between a sound source ahead of the hearer and one behind the hearer; therefore additional cues have to be evaluated.

Sound localization in the median plane (front, above, back, below)

The human outer ear, i.e. the structures of the pinna and the external ear canal, forms direction-selective filters. Depending on the sound input direction in the median plane, different filter resonances become active. These resonances implant direction-specific patterns into the frequency responses of the ears, which can be evaluated by the auditory system (directional bands). Together with other direction-selective reflections at the head, shoulders and torso, they form the outer ear transfer functions.

These patterns in the ear's frequency responses are highly individual, depending on the shape and size of the outer ear. If sound is presented through headphones, and has been recorded via another head with different-shaped outer ear surfaces, the directional patterns differ from the listener's own, and problems will appear when trying to evaluate directions in the median plane with these foreign ears. As a consequence, front–back permutations or inside-the-head localization can appear when listening to dummy head recordings, also referred to as binaural recordings.
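
These outer ear transfer functions are exactly what binaural synthesis reproduces: convolving a dry signal with a measured pair of head-related impulse responses (HRIRs) imposes the direction-specific patterns. A minimal sketch, where hrir_left and hrir_right stand for impulse responses loaded from some measured dataset (hypothetical inputs, not part of the article):

    import numpy as np
    from scipy.signal import fftconvolve

    def binauralize(mono: np.ndarray, hrir_left: np.ndarray,
                    hrir_right: np.ndarray) -> np.ndarray:
        """Render a mono signal at the direction encoded by an HRIR pair;
        returns a (num_samples, 2) stereo array for headphone playback."""
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        return np.stack([left, right], axis=-1)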

Distance of the sound source

The human auditory system has only limited means of determining the distance of a sound source. In the close-up range there are some cues for distance determination, such as extreme level differences (e.g. when whispering into one ear) or specific pinna resonances.

The auditory system uses these cues to estimate the distance to a sound source:
  • Sound spectrum: High frequencies are damped more quickly by the air than low frequencies. Therefore a distant sound source sounds more muffled than a close one, because the high frequencies are attenuated. For sound with a known spectrum (e.g. speech) the distance can be estimated roughly with the help of the perceived sound.
  • Loudness: Distant sound sources have a lower loudness than close ones. This aspect can be evaluated especially for well-known sound sources (e.g. known speakers); see the sketch after this list.
  • Movement: Similar to the visual system there is also the phenomenon of motion parallax in acoustical perception. For a moving listener, nearby sound sources pass by faster than distant sound sources.
  • Reflections: In enclosed rooms two types of sound arrive at a listener: the direct sound arrives at the listener's ears without being reflected at a wall; reflected sound has been reflected at least once at a wall before arriving at the listener. The ratio between direct sound and reflected sound can give an indication of the distance of the sound source.
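
The loudness cue has a simple free-field form: sound pressure from a point source falls off inversely with distance, so the level drops about 6 dB per doubling of distance. A minimal sketch of that relationship (free-field assumption; enclosed rooms behave differently):

    import math

    def level_drop_db(near_m: float, far_m: float) -> float:
        """Free-field level difference between two distances from a
        point source (inverse-distance law, ~6 dB per doubling)."""
        return 20 * math.log10(far_m / near_m)

    print(level_drop_db(1, 2))   # ~6 dB quieter at double the distance
    print(level_drop_db(1, 10))  # ~20 dB quieter at ten times the distance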

Signal processing

Sound processing of the human auditory system is performed in so-called critical bands. The hearing range is segmented into 24 critical bands, each with a width of 1 Bark or 100 Mel. For a directional analysis the signals inside the critical band are analyzed together.
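
The article does not give a formula for the Bark scale; a common approximation is Zwicker and Terhardt's, sketched below.

    import math

    def hz_to_bark(freq_hz: float) -> float:
        """Critical-band rate (Bark) for a frequency in Hz, using the
        Zwicker-Terhardt approximation."""
        return (13.0 * math.atan(0.00076 * freq_hz)
                + 3.5 * math.atan((freq_hz / 7500.0) ** 2))

    print(hz_to_bark(100))    # ~1 Bark
    print(hz_to_bark(15500))  # ~24 Bark, the top of the hearing range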

The auditory system can extract the sound of a desired sound source out of interfering noise. Thus the auditory system can concentrate on only one speaker if other speakers are also talking (the cocktail party effect). With the help of the cocktail party effect, sound from interfering directions is perceived as attenuated compared to the sound from the desired direction. The auditory system can increase the signal-to-noise ratio by up to 15 dB, which means that interfering sound is perceived to be attenuated to half (or less) of its actual loudness.
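
The final claim follows from the common rule of thumb (an assumption here, not stated in the article) that perceived loudness roughly halves for every 10 dB reduction in level:

    # Rule of thumb: perceived loudness halves per 10 dB level reduction.
    def loudness_ratio(attenuation_db: float) -> float:
        return 2 ** (-attenuation_db / 10)

    print(loudness_ratio(15))  # ~0.35, i.e. "half (or less)" as stated above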

Localization in enclosed rooms

In enclosed rooms not only the direct sound from a sound source arrives at the listener's ears, but also sound which has been reflected at the walls. The auditory system analyzes only the direct sound, which arrives first, for sound localization, but not the reflected sound, which arrives later (law of the first wave front). So sound localization remains possible even in an echoic environment. This echo cancellation occurs in the dorsal nucleus of the lateral lemniscus (DNLL).

In order to determine the time periods where the direct sound prevails and can be used for directional evaluation, the auditory system analyzes loudness changes in different critical bands and also the stability of the perceived direction. If there is a strong rise (attack) in loudness in several critical bands and if the perceived direction is stable, this attack is in all probability caused by the direct sound of a sound source which is entering newly or which is changing its signal characteristics. This short time period is used by the auditory system for directional and loudness analysis of this sound. When reflections arrive a little later, they do not enhance the loudness inside the critical bands as strongly, but the directional cues become unstable, because there is a mix of sound from several reflection directions. As a result, no new directional analysis is triggered by the auditory system.

This first detected direction from the direct sound is taken as the found sound source direction until other strong loudness attacks, combined with stable directional information, indicate that a new directional analysis is possible (see Franssen effect).

Animals

Since most animals have two ears, many of the effects of the human auditory system can also be found in animals. Therefore interaural time differences (interaural phase differences) and interaural level differences play a role for the hearing of many animals. But the influence of these effects on localization depends on head size, ear distance, ear position, and the orientation of the ears.

Lateral information (left, ahead, right)

If the ears are located at the side of the head, similar lateral localization cues as for the human auditory system can be used. This means: evaluation of interaural time differences (interaural phase differences) for lower frequencies and evaluation of interaural level differences for higher frequencies. The evaluation of interaural phase differences is useful as long as it gives unambiguous results; this is the case as long as the ear distance is smaller than half the wavelength (at most one wavelength) of the sound waves. For animals with a larger head than humans the evaluation range for interaural phase differences is shifted towards lower frequencies; for animals with a smaller head, this range is shifted towards higher frequencies.

The lowest frequency which can be localized depends on the ear distance. Animals with a greater ear distance can localize lower frequencies than humans can. For animals with a smaller ear distance the lowest localizable frequency is higher than for humans.
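
The shift of the usable phase-evaluation range with head size follows directly from the half-wavelength condition described above. A minimal sketch (assuming c = 343 m/s; the ear distances are illustrative guesses, not measurements from the article):

    SPEED_OF_SOUND = 343.0  # m/s

    def max_unambiguous_freq_hz(ear_distance_m: float) -> float:
        """Highest frequency at which the interaural phase difference is
        unambiguous, i.e. where ear distance equals half a wavelength."""
        return SPEED_OF_SOUND / (2 * ear_distance_m)

    # Illustrative ear distances; for humans this reproduces the ~800 Hz
    # limit quoted earlier in the article.
    for species, distance_m in [("mouse", 0.02), ("human", 0.215), ("elephant", 0.5)]:
        print(f"{species}: phase cue usable up to ~{max_unambiguous_freq_hz(distance_m):.0f} Hz")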

If the ears are located at the side of the head, interaural level differences appear for higher frequencies and can be evaluated for localization tasks. For animals with ears at the top of the head, no shadowing by the head will appear, and therefore there will be much smaller interaural level differences that could be evaluated. Many of these animals can move their ears, and these ear movements can be used as a lateral localization cue.

Sound localization in the median plane (front, above, back, below)

For many mammals there are also pronounced structures in the pinna near the entry of the ear canal. As a consequence, direction-dependent resonances can appear, which could be used as an additional localization cue, similar to the localization in the median plane in the human auditory system.
There are additional localization cues which are also used by animals.

Head tilting

Two detectors positioned at different heights can also be used for sound localization in the median plane (determining the elevation of a sound). In animals, however, rough elevation information is gained simply by tilting the head, provided that the sound lasts long enough to complete the movement. This explains the innate behavior of cocking the head to one side when trying to localize a sound precisely. To get instantaneous localization in more than two dimensions from time-difference or amplitude-difference cues requires more than two detectors.

Localization with one ear (flies)

The tiny parasitic fly Ormia ochracea has become a model organism in sound localization experiments because of its unique ear. The animal is too small for the time difference of sound arriving at the two ears to be calculated in the usual way, yet it can determine the direction of sound sources with exquisite precision. The tympanic membranes of opposite ears are directly connected mechanically, allowing resolution of sub-microsecond time differences and requiring a new neural coding strategy. Ho showed that the coupled-eardrum system in frogs can produce increased interaural vibration disparities when only small arrival time and sound level differences were available to the animal's head. Efforts to build directional microphones based on the coupled-eardrum structure are underway.

Bi-coordinate sound localization in owls

Most owls are nocturnal or crepuscular birds of prey. Because they hunt at night, they must rely on non-visual senses. Experiments by Roger Payne have shown that owls are sensitive to the sounds made by their prey, not to their heat or smell. In fact, the sound cues are both necessary and sufficient for owls to localize mice from a distant perch. For this to work, the owls must be able to accurately localize both the azimuth and the elevation of the sound source.

ITD and ILD

Owls must be able to determine the necessary angle of descent, i.e. the elevation, in addition to the azimuth (the horizontal angle to the sound). This bi-coordinate sound localization is accomplished through two binaural cues: the interaural time difference (ITD) and the interaural level difference (ILD), also known as the interaural intensity difference (IID). The ability in owls is unusual; in ground-bound mammals such as mice, ITD and ILD are not utilized in the same manner. In these mammals, ITDs tend to be utilized for localization of lower-frequency sounds, while ILDs tend to be used for higher-frequency sounds.

ITD occurs whenever the distance from the source of sound to the two ears is different, resulting in differences in the arrival times of the sound at the two ears. When the sound source is directly in front of the owl, the ITD is zero. In sound localization, ITDs are used as cues for location in the azimuth. ITD changes systematically with azimuth: sounds to the right arrive first at the right ear; sounds to the left arrive first at the left ear.

In mammals there is a level difference in sounds at the two ears caused by the sound-shadowing effect of the head. But in many species of owls, level differences arise primarily for sounds that are shifted above or below the elevation of the horizontal plane. This is due to the asymmetry in placement of the ear openings in the owl's head, such that sounds from below the owl reach the left ear first and sounds from above reach the right ear first. IID is a measure of the difference in the level of the sound as it reaches each ear. In many owls, IIDs for high-frequency sounds (higher than 4 or 5 kHz) are the principal cues for locating sound elevation.

Parallel processing pathways in the brain

The axons of the auditory nerve originate from the hair cells of the cochlea in the inner ear. Different sound frequencies are encoded by different fibers of the auditory nerve, arranged along the length of the auditory nerve, but codes for the timing and level of the sound are not segregated within the auditory nerve. Instead, the ITD is encoded by phase locking, i.e. firing at or near a particular phase angle of the sinusoidal stimulus sound wave, and the IID is encoded by spike rate. Both parameters are carried by each fiber of the auditory nerve.

The fibers of the auditory nerve innervate both cochlear nuclei in the brainstem, the cochlear nucleus magnocellularis (mammalian anteroventral cochlear nucleus) and the cochlear nucleus angularis (mammalian posteroventral and dorsal cochlear nuclei). The neurons of the nucleus magnocellularis phase-lock but are fairly insensitive to variations in sound pressure, while the neurons of the nucleus angularis phase-lock poorly, if at all, but are sensitive to variations in sound pressure. These two nuclei are the starting points of two separate but parallel pathways to the inferior colliculus: the pathway from nucleus magnocellularis processes ITDs, and the pathway from nucleus angularis processes IID.

In the time pathway, the nucleus laminaris (mammalian medial superior olive) is the first site of binaural convergence. It is here that ITD is detected and encoded using neuronal delay lines and coincidence detection, as in the Jeffress model; when phase-locked impulses coming from the left and right ears coincide at a laminaris neuron, the cell fires most strongly. Thus, the nucleus laminaris acts as a delay-line coincidence detector, converting distance traveled to time delay and generating a map of interaural time difference. Neurons from the nucleus laminaris project to the core of the central nucleus of the inferior colliculus and to the anterior lateral lemniscal nucleus.
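
The Jeffress-style computation is easy to caricature in code: each model neuron applies a different internal delay to one ear's spike train and responds most strongly when the delayed trains coincide. A toy sketch with binary spike trains (illustrative parameters, in no way a physiological model):

    import numpy as np

    def jeffress_itd_s(left: np.ndarray, right: np.ndarray, fs: float,
                       max_delay_s: float = 700e-6) -> float:
        """Toy delay-line coincidence detector: count coincidences between
        the left spike train and the right train shifted by each candidate
        internal delay; the best-matching shift estimates the ITD (s).
        Positive values mean the sound reached the left ear first."""
        max_shift = int(max_delay_s * fs)
        shifts = list(range(-max_shift, max_shift + 1))
        counts = [
            np.sum(left[max(0, -s):len(left) - max(0, s)]
                   * right[max(0, s):len(right) - max(0, -s)])
            for s in shifts
        ]
        return shifts[int(np.argmax(counts))] / fs

    # Example: a sparse random spike train and a copy delayed by 0.4 ms
    fs = 100_000
    rng = np.random.default_rng(0)
    left = (rng.random(fs // 10) < 0.01).astype(float)
    right = np.roll(left, 40)               # right ear lags by 40 samples
    print(jeffress_itd_s(left, right, fs))  # ~0.0004 s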

In the sound level pathway, the posterior lateral lemniscal nucleus (mammalian lateral superior olive) is the site of binaural convergence and where IID is processed. Stimulation of the contralateral ear excites and that of the ipsilateral ear inhibits the neurons of the nuclei in each brain hemisphere independently. The degree of excitation and inhibition depends on sound pressure, and the difference between the strength of the inhibitory input and that of the excitatory input determines the rate at which neurons of the lemniscal nucleus fire. Thus the response of these neurons is a function of the difference in sound pressure between the two ears.

The time and sound-pressure pathways converge at the lateral shell of the central nucleus of the inferior colliculus. The lateral shell projects to the external nucleus, where each space-specific neuron responds to acoustic stimuli only if the sound originates from a restricted area in space, i.e. the receptive field of that neuron. These neurons respond exclusively to binaural signals containing the same ITD and IID that would be created by a sound source located in the neuron's receptive field. Thus their receptive fields arise from the neurons' tuning to particular combinations of ITD and IID simultaneously in a narrow range. These space-specific neurons can thus form a map of auditory space in which the positions of receptive fields in space are isomorphically projected onto the anatomical sites of the neurons.

Significance of asymmetrical ears for localization of elevation

The ears of many species of owls are asymmetrical. For example, in barn owls (Tyto alba), the placement of the two ear flaps (operculi) lying directly in front of the ear canal opening is different for each ear. This asymmetry is such that the center of the left ear flap is slightly above a horizontal line passing through the eyes and directed downward, while the center of the right ear flap is slightly below the line and directed upward. In two other species of owls with asymmetrical ears, the saw-whet owl and the long-eared owl, the asymmetry is achieved by different means: in saw-whets, the skull is asymmetrical; in the long-eared owl, the skin structures lying near the ear form asymmetrical entrances to the ear canals, which is achieved by a horizontal membrane. Thus, ear asymmetry seems to have evolved on at least three different occasions among owls. Because owls depend on their sense of hearing for hunting, this convergent evolution in owl ears suggests that asymmetry is important for sound localization in the owl.

Ear asymmetry causes sound originating from below eye level to sound louder in the left ear, while sound originating from above eye level sounds louder in the right ear. Asymmetrical ear placement also causes the IID for high frequencies (between 4 kHz and 8 kHz) to vary systematically with elevation, converting IID into a map of elevation. Thus, it is essential for an owl to have the ability to hear high frequencies. Many birds have the neurophysiological machinery to process both ITD and IID, but because they have small heads and low-frequency sensitivity, they use both parameters only for localization in the azimuth. Through evolution, the ability to hear frequencies higher than 3 kHz, the highest frequency of owl flight noise, enabled owls to exploit elevational IIDs produced by small ear asymmetries that arose by chance, and began the evolution of more elaborate forms of ear asymmetry.

Another demonstration of the importance of ear asymmetry in owls is that, in experiments, owls with symmetrical ears, such as the screech owl (Otus asio) and the great horned owl (Bubo virginianus), could not be trained to locate prey in total darkness, whereas owls with asymmetrical ears could be trained.

Neural interactions

In vertebrates, interaural time differences are known to be calculated in the superior olivary nucleus of the brainstem. According to Jeffress, this calculation relies on delay lines: neurons in the superior olive which accept innervation from each ear with different connecting axon lengths. Some cells are more directly connected to one ear than the other, and thus they are specific for a particular interaural time difference. This theory is equivalent to the mathematical procedure of cross-correlation. However, because Jeffress' theory is unable to account for the precedence effect, in which only the first of multiple identical sounds is used to determine the sounds' location (thus avoiding confusion caused by echoes), it cannot entirely explain the response. Furthermore, a number of recent physiological observations made in the midbrain and brainstem of small mammals have shed considerable doubt on the validity of Jeffress' original ideas.
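
That equivalence is straightforward to state in code: the lag maximizing the cross-correlation of the two ear signals is the ITD estimate. A minimal sketch using NumPy (continuous ear signals this time, rather than the spike trains sketched earlier):

    import numpy as np

    def itd_by_cross_correlation_s(left: np.ndarray, right: np.ndarray,
                                   fs: float) -> float:
        """Interaural time difference (s) estimated as the lag that
        maximizes the cross-correlation of the two ear signals.
        Positive values mean the left signal lags the right one."""
        corr = np.correlate(left, right, mode="full")
        lags = np.arange(-(len(right) - 1), len(left))
        return lags[int(np.argmax(corr))] / fs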

Neurons sensitive to ILDs are excited by stimulation of one ear and inhibited by stimulation of the other ear, such that the response magnitude of the cell depends on the relative strengths of the two inputs, which in turn, depends on the sound intensities at the ears.

In the auditory midbrain nucleus, the inferior colliculus (IC), many ILD-sensitive neurons have response functions that decline steeply from maximum to zero spikes as a function of ILD. However, there are also many neurons with much shallower response functions that do not decline to zero spikes.

See also

  • Binaural fusion
  • Acoustic location
  • Animal echolocation
  • Coincidence detection in neurobiology
  • Human echolocation
  • Psychoacoustics
