Engineering Acoustics/Echolocation in Bats and Dolphins

Echolocation is a form of active sonar: an animal emits sounds and locates objects by analyzing the reflected waves. Many animals, such as bats and dolphins, use this method to hunt, to avoid predators, and to navigate. Echolocating animals rely on multiple receivers to better perceive an object's distance and direction: by noting differences in sound level and in the arrival time of the reflected sound, the animal determines the location of the object, as well as its size, its density, and other features. Humans with visual disabilities are also capable of applying biosonar to facilitate their navigation. This page focuses mainly on how echolocation works in bats and dolphins.

Sound Reflection


When a wave hits an obstacle, it does not simply stop there; rather, it is reflected, diffracted, and refracted. Snell's law states that:


 * $$\frac{sin \theta_i}{c_1}=\frac{sin \theta_t}{c_2}=\frac{sin \theta_r}{c_1}$$

where the nomenclature is defined in Figure 1. The law of reflection states that the angle of incidence is equal to the angle of reflection ($$\theta_i=\theta_r$$), as the previous equation shows, since the incident and reflected waves both travel in medium 1 at speed $$c_1$$.
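As a quick check of these relations, here is a minimal Python sketch; the sound speeds for air and water are approximate values assumed only for illustration:

```python
import math

def refraction_angle(theta_i_deg, c1, c2):
    """Return the transmission angle (degrees) from Snell's law:
    sin(theta_i)/c1 = sin(theta_t)/c2."""
    s = math.sin(math.radians(theta_i_deg)) * c2 / c1
    if abs(s) > 1.0:
        return None  # beyond the critical angle: total reflection
    return math.degrees(math.asin(s))

# Example: sound passing from air (c1 ~ 343 m/s) into water (c2 ~ 1480 m/s).
# The reflected angle simply equals theta_i; the transmitted wave bends away
# from the normal because c2 > c1.
theta_t = refraction_angle(10.0, 343.0, 1480.0)
print(f"theta_t = {theta_t:.1f} deg")  # → theta_t = 48.5 deg
```

Note that for air-to-water incidence the critical angle is only about 13 degrees, so steeper incident angles produce total reflection (the function returns `None`).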

In order to determine the reflection coefficient $$R$$, which gives the proportion of the incident wave that is reflected, the acoustic impedance is needed. It is defined as follows, where $$c$$ is the speed of sound and $$\rho$$ is the density of the medium:


 * $$Z=\rho c$$

For fluids only, the sound reflection coefficient is defined in terms of the incidence angle and the characteristic impedance of the two media as [3]:


 * $$R=\frac{\frac{Z_2}{Z_1}-\sqrt{1-[n-1]\tan^2 \theta_1}}{\frac{Z_2}{Z_1}+\sqrt{1-[n-1]\tan^2 \theta_1}}\qquad$$     where $$n=\left ( \frac{c_2}{c_1} \right )^2$$

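This coefficient can be evaluated numerically. The sketch below assumes approximate properties for water and air, for illustration only:

```python
import math

def reflection_coefficient(rho1, c1, rho2, c2, theta1_deg):
    """Reflection coefficient for oblique incidence at a fluid-fluid
    interface, using the equation above with n = (c2/c1)**2."""
    Z1, Z2 = rho1 * c1, rho2 * c2
    n = (c2 / c1) ** 2
    arg = 1.0 - (n - 1.0) * math.tan(math.radians(theta1_deg)) ** 2
    if arg < 0.0:
        return 1.0  # total reflection beyond the critical angle
    root = math.sqrt(arg)
    return (Z2 / Z1 - root) / (Z2 / Z1 + root)

# Water (rho ~ 1000 kg/m^3, c ~ 1480 m/s) to air (rho ~ 1.2 kg/m^3, c ~ 343 m/s),
# normal incidence: R is close to -1, i.e. the wave reflects almost completely
# with a phase reversal. This is why the water surface is a strong reflector.
R = reflection_coefficient(1000.0, 1480.0, 1.2, 343.0, 0.0)
print(f"R = {R:.4f}")
```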
As for the case where medium 2 is a solid, the sound reflection coefficient becomes [9]:


 * $$R=\frac{(r_n-\frac{r_1}{cos \theta_i})+jx_n}{(r_n+\frac{r_1}{cos \theta_i})+jx_n}\qquad$$     where $$Z_n=r_n+jx_n$$ is the normal specific acoustic impedance.

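A short sketch for the solid case, assuming the notation above with $$r_1=\rho_1 c_1$$ taken as the characteristic impedance of the fluid (an assumption about the notation) and an illustrative, made-up value of $$Z_n$$:

```python
import math

def reflection_coefficient_solid(r1, theta_i_deg, Zn):
    """Reflection coefficient when medium 2 is a solid, using its normal
    specific acoustic impedance Zn = r_n + j*x_n (complex number)."""
    ri = r1 / math.cos(math.radians(theta_i_deg))
    return (Zn - ri) / (Zn + ri)  # complex-valued R

# Illustrative values only: water on the incident side (r1 ~ 1.48e6 Pa*s/m),
# 30-degree incidence, and a hypothetical solid impedance Zn.
Zn = complex(3.0e6, 1.0e6)
R = reflection_coefficient_solid(1.48e6, 30.0, Zn)
print(f"|R| = {abs(R):.3f}")
```

Because $$Z_n$$ is complex, $$R$$ is complex as well; its magnitude gives the reflected amplitude ratio and its argument the phase shift on reflection.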
The law of conservation of energy states that the total amount of energy in a system is constant; therefore, any energy that is not reflected is either diffracted or transmitted into the second medium, where it may be refracted due to the difference in refractive index.

Sound Localization
Sound localization denotes the ability to determine the direction and distance of an object, or "target", from the sound it emits or reflects. Auditory systems of humans and animals alike use the following cues for sound localization: interaural time differences and interaural level differences between the two ears, spectral information, and pattern matching [8].

To locate sound in the lateral plane (left, right, and front), the binaural cues required are:
 * Interaural time differences: for frequencies below 800 Hz
 * Interaural level differences: for frequencies above 1600 Hz
 * Both: for frequencies between 800 and 1600 Hz

Interaural Time Differences


Humans and many animals use both ears to help identify the location of a sound; this is called binaural hearing. Depending on where the sound comes from, it reaches either the right or the left ear first, allowing the auditory system to compare the arrival times of the sound at the two reception points. This delay is the interaural time difference. The relationship between the difference in the length of the sound paths to the two ears, $$\Delta d$$, and the angular position of the source, $$\theta$$, may be calculated using the equation [1]:


 * $$\Delta d=r(\theta + sin \theta)$$

where $$r$$ is half the distance between the ears. This cue is mainly used for azimuthal localization; thus, if the object is directly in front of the listener, there is no interaural time difference. The cue works at low frequencies, where the size of the head is less than half the wavelength of the sound, so the phase delay between the two ears can be detected unambiguously. However, when the frequency drops below about 80 Hz, the phase difference becomes so small that locating the direction of the sound source becomes extremely difficult.
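The path-length difference converts to a time difference by dividing by the speed of sound. A small sketch, assuming a half ear spacing of about 8.75 cm (an illustrative value, not from the text):

```python
import math

def interaural_time_difference(theta_deg, r=0.0875, c=343.0):
    """Path-length difference delta_d = r*(theta + sin(theta)) from the
    equation above, converted to a time difference in seconds.
    r: half the distance between the ears in metres (assumed ~8.75 cm),
    c: speed of sound in air, m/s."""
    theta = math.radians(theta_deg)
    delta_d = r * (theta + math.sin(theta))
    return delta_d / c

# A source directly to one side (90 degrees) gives the maximum ITD,
# on the order of a few hundred microseconds; a source straight ahead gives 0.
print(f"ITD = {interaural_time_difference(90.0) * 1e6:.0f} us")
```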

Interaural Level Differences
As the frequency increases above 1600 Hz, the dimensions of the head become greater than the wavelength of the sound, and phase delays no longer allow the auditory system to detect the location of the source. Hence, the difference in sound intensity between the two ears is used instead. Sound intensity decreases with distance from the source (for a point source, as the inverse square of the distance), so the ear closer to the source receives the more intense sound. This difference is also influenced greatly by the acoustic shadow cast by the head: as depicted in Figure 3, the head blocks sound, further decreasing the intensity reaching the far ear [4].

Active Sonar
An active sonar supplies its own source signal and then waits for echoes reflected by the target. Bats and dolphins use active sonar for echolocation. The system begins with a signal produced at the transmitter with a source level (SL). This acoustic wave has an intensity $$I(r)$$, where $$r$$ is the distance from the source. The signal travels to the target while accumulating a transmission loss (TL). At the target, a fraction of the incident signal, characterized by the target strength (TS), is reflected toward the receiver, experiencing another transmission loss (TL') on the way. For the monostatic case, where the source and the receiver are at the same position, TL is equal to TL'; thus, the echo level (EL) is written as [9]:


 * $$ EL=10\log \frac{I(r) \sigma}{I_{ref} 4 \pi}-TL'$$

The equation for target strength is [9]:


 * $$TS=10\log\frac{\sigma}{4\pi}$$

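Collecting the terms above, the monostatic sonar equation EL = SL - 2TL + TS can be evaluated directly. The sketch below assumes spherical spreading, so the one-way transmission loss is TL = 20 log10(r); the source level and target strength are illustrative, made-up values:

```python
import math

def echo_level(SL, r, TS):
    """Monostatic echo level EL = SL - 2*TL + TS (all in dB), assuming
    spherical spreading TL = 20*log10(r / 1 m) and neglecting absorption."""
    TL = 20.0 * math.log10(r)
    return SL - 2.0 * TL + TS

# Illustrative numbers: a 220 dB source level and a target 100 m away
# with a -30 dB target strength. TL = 40 dB each way.
print(f"EL = {echo_level(220.0, 100.0, -30.0):.0f} dB")  # → EL = 110 dB
```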
Reverberation
As sound is emitted, other objects in the environment can scatter the signal, creating echoes in addition to the one produced by the target itself. Underwater, for instance, reverberation can result from bubbles, fish, the sea surface and bottom, or plankton. These background signals mask the echo from the target of interest, so the reverberation level (RL) must be found in order to distinguish the target echo from the background. For the monostatic case, by analogy with the echo level, RL is:


 * $$RL=SL-2TL+TS_R$$

$$TS_R$$ represents the target strength for the reverberating region and is defined by:


 * $$TS_R=S_v+10\log V=S_A+10\log A$$

where "$$V$$ (or $$A$$) is the volume (or surface area) at the range of the target from which scattered sound can arrive at the receiver during the same time as the echo from the desired target" [9] and $$S_v$$ (or $$S_A$$) is the scattering strength for a unit volume (or a unit surface area).
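The volume form of $$TS_R$$ can be computed directly; the scattering strength and reverberating volume below are illustrative values only:

```python
import math

def reverberation_target_strength(Sv, V):
    """TS_R = S_v + 10*log10(V) for volume reverberation (equation above).
    Sv: scattering strength per unit volume in dB, V: volume in m^3."""
    return Sv + 10.0 * math.log10(V)

# Illustrative: Sv = -70 dB over a 1000 m^3 reverberating volume
# gives TS_R = -70 + 30 = -40 dB.
print(f"TS_R = {reverberation_target_strength(-70.0, 1000.0):.0f} dB")  # → TS_R = -40 dB
```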

Echolocation in Bats
Bats produce sounds in their larynx, emitting them from the mouth or, in some species, through the nose. Their calls consist of various types: broadband components (varying in frequency), pure tone signals (constant frequency), or a mix of both. The duration of these sounds varies between 0.3 and 100 ms over a frequency range of 14 to 100 kHz [7]. Each species' calls differ, having been adapted to suit its lifestyle and hunting habits.

The broadband components are used for hunting in closed environments with background noise. The short calls yield precision in locating the target, and their rapid repetition prevents the outgoing call from overlapping the returning echo, allowing the use of interaural time differences. Pure tone signals are used while hunting in open environments without much background noise; these calls are longer in duration, allowing bats to locate prey at greater distances. When searching for prey, bats emit sounds 10 to 20 times per second. As they approach their target, the emission rate can reach up to 200 calls per second. The usual range for echolocation is around 17 m.

For other mammals, the interaural time difference and the interaural level difference are cues for lateral detection only. Bats, however, can also use interaural level differences to locate objects in the vertical direction, provided the received signals are broadband. Another difference is that bats' ears are capable of moving, allowing them to switch between different acoustic cues.

The sound-conducting apparatus in bats is similar to that of most mammals; however, over years of evolution it has been adapted to suit their needs. One special characteristic is the large pinnae, which serve as acoustic antennae and mechanical amplifiers. Movement of the pinnae permits focusing of the incoming sound wave, amplifying or weakening it.

Echolocation in Dolphins


The basic idea of echolocation is comparable between bats and dolphins; however, since the two animals live in very different environments, specific characteristics differ between them.

Dolphins use the nasal-pharyngeal area to produce various types of sounds (clicks, burst pulses, and whistles) serving two main functions: echolocation and communication. Clicks, slow-rate pulses lasting 70-250 μs at frequencies of 40 to 150 kHz, and bursts, pulses produced at rapid rates, are primarily used for echolocation [1]. After the clicks are produced, the associated sound waves travel through the melon, the rounded, fat-filled area of the dolphin's forehead. Its function is to act as an acoustic lens, focusing the produced waves into a beam and sending it ahead. At such high frequencies the waves do not travel far in water; hence, echolocation is most effective at distances of 5 to 200 m for prey 5 to 15 cm in length [6].
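The connection between click frequency and detectable prey size can be seen from the wavelength in water; a small sketch using an approximate sound speed:

```python
# Wavelengths of dolphin clicks in water: at the top of the click band the
# wavelength is about 1 cm, smaller than the 5-15 cm prey mentioned above,
# which is what makes such small targets resolvable.
c_water = 1480.0  # approximate speed of sound in sea water, m/s
for f_khz in (40, 150):
    wavelength_cm = c_water / (f_khz * 1e3) * 100
    print(f"{f_khz} kHz -> wavelength = {wavelength_cm:.1f} cm")
```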

When the waves are reflected after hitting an object, dolphins, unlike bats, which have pinnae to direct the waves to the inner ear, receive the returning signal via the fat-filled cavities of the lower jaw bones (Figure 6). These fatty tissues have an acoustic impedance close to the high impedance of water, allowing sound waves to travel to the inner ear without being reflected at the body surface. Sound is then conducted through the middle ear to the inner ear, from which it is transferred to the brain.