
Physics of the Ear

The physics of the ear involves the study of how sound waves are collected, transmitted, and processed by the ear. This includes the mechanics of the outer, middle, and inner ear, as well as the conversion of sound waves into electrical signals by the cochlea. Understanding the physics of the ear is crucial for developing hearing aids and other auditory technologies.

Written by Perlego with AI assistance

8 Key excerpts on "Physics of the Ear"

  • Scott-Brown's Otorhinolaryngology and Head and Neck Surgery

    Volume 2: Paediatrics, The Ear, and Skull Base Surgery

    • John C Watkinson, Ray W Clarke (Authors)
    • 2018 (Publication Date)
    • CRC Press (Publisher)
    CHAPTER 48

    Physiology of Hearing

    Soumit Dasgupta and Michael Maslin
    Introduction · Fundamentals of sound · A sound wave · Perceptual correlates of frequency and intensity · The influence of a sound’s duration · Measurement of sound · Transmission of sound between different media · Applied physiology of hearing · The pinna · The external auditory canal · The middle ear · The cochlea · The auditory nerve and cochlear efferents · References
    SEARCH STRATEGY
    The data may be updated by a PubMed search using the keywords for ‘Fundamentals of sound’: soundwaves, measuring sound, decibel scales and acoustic impedance. For ‘Applied physiology of Hearing’, use the following keywords: pinna, external auditory canal, ossicles, tympanic membrane, cochlea, outer hair cells, inner hair cells, auditory nerve and cochlear efferents.

    Introduction

    This chapter consists of two parts. The first (Fundamentals of Sound) discusses the properties of sound and concepts important to hearing; the second (Applied Physiology of Hearing) relates the physiology of the peripheral ear to its normal function in hearing and to what happens when disease strikes.

    Fundamentals of sound

    Michael Maslin

    A sound wave

    For something to be a source of sound it must vibrate. Sound is vibratory energy that is transmitted from the source through surrounding media in the form of pressure waves. Sound waves cannot therefore travel through a vacuum; a physical medium is required to convey the vibratory energy. In the context of otolaryngology, the most relevant physical medium is air since most sounds reach the ear by the vibrations of air molecules. Vibratory energy that arrives at, and is detected by, the ear gives rise to hearing. Hence, some understanding of the physical properties of sound is required in order to understand the way in which sound is detected by the ear.
    As a starting point one might consider a scenario whereby no force is applied to a region of air. In such a scenario, the air molecules are said to be at ambient (or atmospheric) pressure. When an object in the region vibrates, a force is applied to those air molecules that are in contact with the object, causing their displacement. For example, take the loudspeaker shown in Figure 48.1. As the diaphragm of the loudspeaker moves to the right of its centre position, the air molecules at the surface of the diaphragm are displaced to the right. This movement causes the air molecules to be pushed closer to adjacent air molecules that are further over to the right. Relative to the ambient pressure, a localized pressure increase is produced and this pressure increase is known as compression. Next, when the diaphragm of the loudspeaker moves back through its centre position and over to the left, those air molecules that were displaced to the right are now drawn to the left. When the displaced molecules reach their centre (equilibrium) position, the pressure will momentarily be equal to ambient pressure, but as the molecules move further to the left they are drawn increasingly further apart from their adjacent air molecules. The pressure will now drop below the ambient pressure and this decrease in pressure is called rarefaction. As each molecule vibrates backwards and forwards around its equilibrium position, alternating regions of compression and rarefaction arise. If these pressure variations can be detected by the ear, they can be described as sound. (It is worth noting that the variations in density of the air molecules shown in Figure 48.1
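    As a concrete illustration of alternating compression and rarefaction, the short sketch below (not from the chapter; the amplitude, frequency, and sampling are made-up, illustrative values) evaluates the pressure deviation of a pure tone at a fixed point over one cycle and labels each sample:

        import math

        P_ATM = 101325.0   # ambient (atmospheric) pressure, Pa
        A = 0.02           # assumed peak pressure deviation, Pa (illustrative)
        F = 440.0          # assumed tone frequency, Hz

        for i in range(8):                             # eight samples across one period
            t = i / (8 * F)
            dp = A * math.sin(2 * math.pi * F * t)     # deviation from ambient pressure
            state = "compression" if dp > 0 else ("rarefaction" if dp < 0 else "ambient")
            print(f"t = {t * 1000:6.3f} ms   p = {P_ATM + dp:12.5f} Pa   {state}")

    Positive deviations correspond to molecules pushed together (compression); negative deviations correspond to molecules drawn apart (rarefaction).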
  • Engineering Acoustics

    Noise and Vibration Control

    • Malcolm J. Crocker, Jorge P. Arenas (Authors)
    • 2020 (Publication Date)
    • Wiley (Publisher)
    4 Human Hearing, Speech and Psychoacoustics

    4.1 Introduction

    The human ear is a marvelous and very sensitive biomechanical system for detecting sound. If it were only slightly more sensitive, we would be able to hear the Brownian (random) motion of the air molecules and we would have a perpetual buzz in our ears! The ear has a wide frequency response from about 15 or 20 Hz to about 20 kHz. Also, the ear has a large dynamic range; the ratio of the loudest sound pressure we can tolerate to the faintest we can hear is about 10 million (10⁷). There are three essential reasons to consider the ear in this book. Sound pressure levels are now so high in industrialized societies that many individuals are exposed to intense noise and permanent damage results. Large numbers of other individuals are exposed to noise from aircraft, surface traffic, construction equipment or machines and appliances, and disturbance and annoyance result. Lastly, there are subjective reasons. An understanding of people's subjective response to noise allows environmentalists and engineers to reduce noise in more effective ways. The human auditory response to sound concerns the science called psychoacoustics. For example, noise should be reduced in the frequency range in which the ear is most sensitive. Noise reduction should be by a magnitude which is subjectively significant. There are several other subjective parameters which are important in hearing.
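    As a quick aside (ours, not the authors'): if the quoted 10⁷ figure is a ratio of sound pressures, the corresponding dynamic range on the decibel scale, which uses 20·log₁₀ for pressure ratios, is 140 dB:

        import math

        ratio = 1e7                        # loudest-to-faintest pressure ratio from the excerpt
        level_db = 20 * math.log10(ratio)  # dB for a pressure (not power) ratio
        print(level_db)                    # -> 140.0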

    4.2 Construction of Ear and Its Working

    The ear can be divided into three main parts (Figure 4.1): the outer, middle, and inner ear. The outer ear, consisting of the fleshy pinna and ear canal, conducts the sound waves onto the eardrum. The middle ear converts the sound waves into mechanical motion of the auditory ossicles, and the inner ear converts the mechanical motion into neural impulses which travel along the auditory nerves to the brain. The anatomy and functioning of the ear are described more completely in various other references and textbooks [1–8].
  • The Art of Sound Reproduction
    • John Watkinson (Author)
    • 2012 (Publication Date)
    • Routledge (Publisher)

    Chapter 3

    Sound and psychoacoustics

    In this chapter the characteristics of sound as an airborne vibration and as a human sensation are tied together. The direction-sensing ability of the ear is not considered here as it will be treated in detail in Chapter 7.

    3.1 What is sound?

    There is a well-known philosophical riddle which goes ‘If a tree falls in the forest and no one is there to hear it, does it make a sound?’ This question can have a number of answers depending on the plane one chooses to consider. I believe that to understand what sound really is requires us to interpret this on many planes.
    Physics can tell us the mechanism by which disturbances propagate through the air and if this is our definition of sound, then the falling tree needs no witness. We do, however, have the problem that accurately reproducing that sound is difficult because in physics there are no limits to the frequencies and levels which must be considered.
    Biology can tell us that the ear only responds to a certain range of frequencies provided a threshold level is exceeded. If this is our definition of sound, then its reproduction is easier because it is only necessary to reproduce that range of levels and frequencies which the ear can detect.
    Psychoacoustics can describe how our hearing has finite resolution in both time and frequency domains such that what we perceive is an inexact impression. Some aspects of the original disturbance are inaudible to us and are said to be masked. If our goal is the highest quality, we can design our imperfect equipment so that the shortcomings are masked. Conversely if our goal is economy we can use compression and hope that masking will disguise the inaccuracies it causes.
    A study of the finite resolution of the ear shows how some combinations of tones sound pleasurable whereas others are irritating. Music has evolved empirically to emphasize primarily the former. Nevertheless we are still struggling to explain why we enjoy music and why certain sounds can make us happy and others can reduce us to tears. These characteristics must still be present in reproduced sound.
  • Physics And Biology: From Molecules To Life
    • Jean-François Allemand, Pierre Desbiolles (Authors)
    • 2014 (Publication Date)
    • WSPC (Publisher)
    Physical principles of hearing

    Pascal Martin, CNRS researcher, Laboratoire Physico-Chimie Curie, Institut Curie and CNRS, Paris.

    1 Psychophysical properties of hearing

    To a first approximation, the ear works as a biological analogue of a microphone: it responds to sound-evoked mechanical vibrations by producing electrical signals that travel along the auditory nerve to the brain (Fig. 7.1). Psychophysical experiments from the beginning of the twentieth century have revealed that the ear is a sound detector endowed with remarkable physical specifications.

    1.1 Sensitivity, frequency selectivity and dynamic range of auditory detection

    The ear displays exquisite sensitivity. The faintest sounds that we can detect correspond to a deviation of the air pressure from the mean atmospheric pressure (1 atm = 10⁵ Pa) of only 20 μPa and elicit vibrations of the tympanic membrane of less than one picometer. This amplitude is comparable to that of the Brownian movement that shakes our ears even in the absence of any sound stimulus! The ear thus operates at a physical limit imposed by its own thermal fluctuations. Interestingly, the ear is less sensitive to sounds that last less than about 200 ms; longer sounds are easier to detect. In 1948, the physicist Thomas Gold recognized in this property the signature of a mechanical resonance. By accumulating mechanical energy from cycle to cycle, the amplitude of sound-evoked vibrations would grow until the threshold of detection is reached. By modeling the ear as a harmonic oscillator (a mass attached to a spring and subjected to viscous drag), Gold observed that the ear operates as a high-quality resonator, as if vibrations inside the cochlea were very lightly damped by the surrounding fluid. However, the vibrating structures are immersed in a fluid too viscous to allow for passive resonance
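    A rough, back-of-envelope reading of Gold's argument (our illustration, not from the text): for a lightly damped driven oscillator, the amplitude envelope builds up with time constant τ = Q/(πf), so the ~200 ms integration time, at an assumed 1 kHz test frequency, would imply a quality factor of several hundred:

        import math

        f = 1000.0             # assumed test frequency, Hz (not from the excerpt)
        tau = 0.200            # ~200 ms integration time quoted in the excerpt
        Q = math.pi * f * tau  # from tau = Q / (pi * f) for a resonator driven at resonance
        print(f"implied Q ~ {Q:.0f}")   # -> implied Q ~ 628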
  • Sound for Film and Television
    • Tomlinson Holman (Author)
    • 2012 (Publication Date)
    • Routledge (Publisher)
    The placement of the head in a sound field also matters. As children, we first hear reflections off the ground shortly after direct sound, because we are close to the ground. As we grow taller, the reflection is heard later than the direct sound. In neither case is the sound late enough to be perceived separately from the direct sound; rather, it modifies the direct sound. The difference between the two conditions is a result of the physical acoustic differences of the conditions; nevertheless, we incorporate them into “perception.” We are, after all, continuously training our perception because we use it to function in the world all the time. We learn that a certain pattern of reflections represents our bodies standing in the real world. Let us look at those parts of perception that are most influenced by physical acoustics first, and then examine the more psychological aspects.

    THE PHYSICAL EAR

    The first part to consider in talking about “the ear” is actually the human body, especially the head. As stated earlier, incoming sound waves interact with the head as a physical object, with sound waves “flowing” around the head via diffraction. (A relatively minor effect even results from sound reflecting off the shoulder.) The interaction differs for various incoming angles of incidence, making at least the levels and times of arrival of the sound wave at the two ears different.
    After reaching the outer ear structure, called the pinna, various reflections and resonances occur because of the structure of the outer ear convolutions. Older treatments on how this worked concentrated on the pinna’s hornlike quality, its ability to gather sound from many directions and deliver it to the ear canal. Since the 1970s, the detailed role of the pattern of reflections and resonances caused by the pinna has come to be better understood. The details of the pinna structure play an important role in localizing sound because the pattern differs for sound arriving from different directions. We come to learn these patterns and rely on them for finding direction.
    After interaction with the head and pinna, sound waves enter the ear canal. The length and diameter of the ear canal tube cause it to be tuned, like a whistle, to the mid-high-frequency range. This ear canal resonance increases the sensitivity of hearing around 2.5 kHz by about 20 dB. An increase in sensitivity in these middle-high frequencies proves highly useful in survival, because it is the range in which many threatening forces might make noise, such as the sound of a snapping twig behind us or the sound of lions moving through tall grass.
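    The quoted resonance is consistent with treating the canal as a tube open at the pinna and closed at the eardrum, which resonates at a quarter wavelength. A minimal sketch (the canal length is an assumed, typical textbook value, not taken from this excerpt):

        C = 343.0           # speed of sound in air at room temperature, m/s
        L = 0.025           # assumed ear-canal length, ~2.5 cm
        f = C / (4 * L)     # quarter-wave resonance of a closed-open tube
        print(f"{f:.0f} Hz")    # -> 3430 Hz, the same order as the quoted ~2.5 kHz peak

    The uniform-tube estimate overshoots the quoted 2.5 kHz somewhat; real ear canals are neither straight nor of constant cross-section, and the eardrum is not a rigid termination.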
  • Communication Acoustics

    An Introduction to Speech, Audio and Psychoacoustics

    • Ville Pulkki, Matti Karjalainen (Authors)
    • 2015 (Publication Date)
    • Wiley (Publisher)
    7 Physiology and Anatomy of Hearing
    The purpose of hearing is to capture acoustic vibrations arriving at the ear and analyse the content of the signal to deliver information about the acoustic surroundings to the higher levels in the brain. The auditory system is the sensory system for the sense of hearing. According to current knowledge, the auditory system divides broadband ear-canal signals into frequency bands, and then conducts a sophisticated analysis on the bands in parallel and in sequence. The human auditory system largely resembles the hearing system in other mammals, but humans have one property that has developed much farther: the ability to analyse and recognize spoken language. As a consequence, the sensitivity to and resolution of some speech- and voice-related features of sound are very good. Thus, a major part of this book is devoted to discussing the different roles of hearing and the auditory perception in typical human communication.
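    The band-splitting idea can be caricatured with an ordinary bandpass filterbank. The sketch below is a toy stand-in only; real auditory filters are asymmetric, level-dependent, and roughly logarithmically spaced, and the band edges here are made up:

        import numpy as np
        from scipy.signal import butter, sosfilt

        FS = 16000                                       # assumed sample rate, Hz
        EDGES = [100, 200, 400, 800, 1600, 3200, 6400]   # illustrative band edges, Hz

        t = np.arange(FS) / FS                           # one second of signal
        x = np.sin(2 * np.pi * 1000 * t)                 # 1 kHz test tone

        for lo, hi in zip(EDGES[:-1], EDGES[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
            band = sosfilt(sos, x)
            rms = np.sqrt(np.mean(band ** 2))            # energy this band picks up
            print(f"{lo:5d}-{hi:<5d} Hz   rms = {rms:.3f}")

    Only the 800-1600 Hz band responds strongly to the 1 kHz tone, which is the essence of the parallel, per-band analysis described above.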
    The functional properties, or the physiology of hearing, are interesting in the context of understanding communication and engineering applications, especially from a basic research point of view. This topic includes the acoustic-to-mechanic and then to neural conversion occurring in the auditory periphery and the neural functions of the auditory pathway. From a communication point of view, however, a very detailed understanding of the physiology of hearing is not necessary. The anatomical structure of hearing is also somewhat interesting, though not very important as such, except in some special cases, such as audiology or spatial hearing. Thus, a brief introduction to both the anatomy and physiology of the auditory system is considered sufficient in this chapter.

    7.1 Global Structure of the Ear

    Humans, as well as most animals, have two sensors for sound – the left and right ear – and a complex neural system to analyse the sound signals received by them. The ear, more specifically the peripheral auditory system, consists of the external ear for capturing sound waves travelling in the air, the middle ear for mechanical conduction of the vibrations, and the inner ear for mechanical-to-neural transduction. Neural signals from the periphery are transmitted through the auditory pathway
  • The Effects of Sound on People
    • James P. Cowan (Author)
    • 2016 (Publication Date)
    • Wiley (Publisher)
    3 Sound Perception

    3.1 Introduction

    The final step in laying the foundation for an understanding of the effects of sound on people is an explanation of the human hearing mechanism. The anatomy of the human hearing mechanism is introduced, along with an explanation of the different ways in which sound waves become converted into electrical signals that are interpreted by the brain as sound. Normal hearing methods and hypersensitivities are addressed to clarify why some sounds bother some people more than others. This chapter provides an overview of the aspects of sound perception that have been explored by researchers in recent history.

    3.2 Human Hearing Apparatus and Mechanism

    The anatomy associated with the human hearing mechanism provides a remarkable transduction system for the efficient conveyance of sound energy to the brain. Our sense of hearing has been crucial to our survival, mainly for communication and to react to impending danger. Modern society has minimized the need for a survival reaction in humans, but hearing is still vital to the survival of most other species.
    A transducer is a device that converts energy from one form into another. The human hearing mechanism performs multiple transductions, converting acoustic energy into mechanical energy and then into electrical energy before it is interpreted as sound by the brain. The anatomy that accomplishes these transductions is typically divided into three regions – the outer, middle, and inner ears – each being separated by the type of energy conveyed. The outer ear channels acoustic energy to the middle ear, which channels transduced mechanical energy to the inner ear, which in turn transduces mechanical into electrical energy before it is sent through the auditory nerve to the brain for processing.
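    Those three transductions can be pictured as a simple function chain. The sketch below is purely schematic; the gains and the firing threshold are invented placeholders, not physiological values:

        def outer_ear(p_acoustic):
            # acoustic stage: pinna and canal shape and slightly boost the wave
            return 1.5 * p_acoustic            # assumed mid-frequency gain

        def middle_ear(p_canal):
            # mechanical stage: ossicles match low-impedance air to cochlear fluid
            return 20.0 * p_canal              # assumed impedance-matching gain

        def inner_ear(p_cochlea):
            # electrical stage: hair cells signal once a threshold is exceeded
            return p_cochlea > 5e-4            # hypothetical firing threshold

        p_in = 2e-5                            # 20 uPa, near the threshold of hearing
        print(inner_ear(middle_ear(outer_ear(p_in))))    # -> True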
  • Phonetics

    Transcription, Production, Acoustics, and Perception

    • Henning Reetz, Allard Jongman (Authors)
    • 2011 (Publication Date)
    • Wiley-Blackwell (Publisher)

    12

    Physiology and Psychophysics of Hearing

    For historical reasons, speech sounds are categorized according to the way they are produced. This approach to investigating speech, focusing on the production of sounds, was motivated by the limited set of methods that were available for the measurement and representation of speech. The speech production apparatus was more or less easily accessible to investigation, and the chain of events during speech production was studied by means of introspection and careful observation. The ear, on the other hand, is small and inaccessible. Examining what happens within the ear in living subjects requires advanced technical equipment. Furthermore, much of the task of “hearing” relates to processes performed in the brain, where eventually the perception of “sounds” leads to the perception of “speech.” The scientific investigation of the process of hearing is therefore relatively young, and our knowledge is still incomplete in many domains.
    With very recent brain imaging techniques (including electro-encephalography [EEG], magneto-encephalography [MEG], and functional magnetic resonance imaging [fMRI]), it has become possible to observe brain activity to a limited extent when a listener hears, for example, an [ɑ] or an [i]. Other aspects of the hearing process are investigated by indirect methods, allowing only for inferences: for example, by asking a subject if she hears an [ɑ] or an [i], and relating the answer to the acoustic properties of the speech signal.
    The observation that a given phoneme can be produced with different articulatory gestures suggests that, in addition to an articulatory description, auditory and perceptual descriptions of speech sounds are important as well. For example, vowels are produced with rather different tongue positions by individual speakers, and the production of “rounded” vowels (like [u]) does not necessarily require a lip gesture after all: just stand in front of a mirror and try to produce an [u] without pursing the lips. This suggests that auditory targets (for example, the distribution of energy in a spectrum) rather than articulatory targets (like the position of the tongue) play the major role in speech perception. In other words, although it is possible to describe speech sounds in articulatory terms, and although the existing articulatory categorizations are generally quite effective, it may be advantageous to use auditory or perceptual categories.
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.