Languages & Linguistics

Phonotactics

Phonotactics refers to the study of the permissible combinations of phonemes in a particular language. It examines the rules and patterns governing the arrangement of sounds within words, including syllable structure, consonant clusters, and permissible phoneme sequences. Phonotactic constraints vary across languages and play a crucial role in shaping the phonological structure and pronunciation of words.
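To make the idea of position-sensitive restrictions concrete, here is a minimal sketch in Python. It encodes a few well-known English constraints (for example, /h/ occurs only before a vowel, and neither /ŋ/ nor the cluster /gz/ can begin a word) as simple checks over a phonemic transcription. It is illustrative only; the rule names, vowel set, and transcription format are assumptions of the sketch, not a standard resource.

```python
# Illustrative only: a handful of English phonotactic restrictions written as
# position-sensitive checks over a list-of-phonemes transcription.

VOWELS = {"i", "ɪ", "e", "ɛ", "æ", "ɑ", "ʌ", "ə", "ɔ", "oʊ", "u", "ʊ", "aɪ", "aʊ"}

def h_only_prevocalic(ph):
    """/h/ may occur only directly before a vowel in English."""
    return all(i + 1 < len(ph) and ph[i + 1] in VOWELS
               for i, seg in enumerate(ph) if seg == "h")

def no_initial_ng(ph):
    """/ŋ/ never occurs word-initially in English."""
    return not ph or ph[0] != "ŋ"

def no_initial_gz(ph):
    """/gz/ (as in 'exhaust' or 'legs') cannot begin an English word."""
    return ph[:2] != ["g", "z"]

RULES = [h_only_prevocalic, no_initial_ng, no_initial_gz]

def violations(ph):
    """Return the names of the constraints a transcription violates."""
    return [rule.__name__ for rule in RULES if not rule(ph)]

print(violations(["h", "æ", "t"]))       # 'hat'  -> []
print(violations(["ŋ", "ɑ", "t"]))       # *ŋɑt   -> ['no_initial_ng']
print(violations(["g", "z", "ɪ", "t"]))  # *gzɪt  -> ['no_initial_gz']
print(violations(["b", "ɑ", "h"]))       # *bɑh   -> ['h_only_prevocalic']
```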

Written by Perlego with AI-assistance

6 Key excerpts on "Phonotactics"

  • Routledge Dictionary of Language and Linguistics
    • Hadumod Bussmann, Kerstin Kazzazi, Gregory Trauth (Authors)
    • 2006 (Publication Date)
    • Routledge
      (Publisher)

    phonostylistics

    A branch of stylistics which investigates the expressive stylistic properties of articulation and intonation.

    Phonotactics

    Study of the sound and phoneme combinations allowed in a given language. Every language has specific phonotactic rules that describe the way in which phonemes can be combined in different positions (initial, medial, and final). For example, in English the stop+fricative cluster /gz/ can occur only in medial (exhaust) or final (legs) position, never in initial position, and /h/ can occur only before, never after, a vowel. The restrictions are partly language-specific and partly universal.

    References

    ⇒ phonology

    phonotagm

    Phonotactic unit (⇒ Phonotactics) that concerns the phonological structure of morphemes as phoneme combinations. Phonotagms are morphologically relevant phoneme combinations that—in contrast with phonotagmemes—are not semantically relevant, e.g. devoicing of voiced stops after a voiceless consonant (fished).

    phonotagmeme

    Phonotactic unit (⇒ etic vs emic analysis) that constitutes a morphologically relevant combination of phonemes on the level of parole (⇒ langue vs parole) and which—in contrast to phonotagms—is semantically distinct from other phonotagms, e.g. ablaut in sing vs song. (⇒ also Phonotactics)

    phrase
    [Grk phrásis ‘expression’]

    1 Term for word groups without a finite verb that belong together syntactically. In contrast, the term ‘clause’ denotes a syntactic construction with a finite verb; thus clause stands hierarchically between phrase and sentence. (⇒ X-bar theory)
    2 In phrase structure grammar, the term ‘phrase’ stands for a set of syntactic elements which form a constituent (=relatively independent group of words). The most important phrases are noun phrases (consisting of nominal expressions with corresponding attributive modifiers: Philip, good old Philip, he, Philip, who is a dreamer), verb phrases (dreams, sees the fire, thinks that he’s right)
  • Causes and Consequences of Word Structure
    • Jennifer Hay (Author)
    • 2004 (Publication Date)
    • Routledge
      (Publisher)

    CHAPTER 2

    Phonotactics and Morphology in Speech Perception

    English-speaking adults and infants use phonotactics to segment words from the speech stream. The goal of this chapter is to demonstrate that this strategy necessarily affects morphological processing.
    After briefly reviewing the evidence for the role of phonotactics in speech perception (2.1), I discuss the results of Experiment 1 – the implementation of a simple recurrent network (2.3). This network is trained to use phonotactics to spot word boundaries, and then tested on a corpus of multimorphemic words. The learning transfers automatically to the spotting of (certain) morpheme boundaries. Morphologically complex words in English cannot escape the effects of a prelexical, phonotactics-based segmentation strategy.
    Having illustrated that segmentation at the word and morpheme level cannot be independent, I present the results of Experiment 2, which show that listeners can, indeed, use phonotactics to segment nonsense words into component “morphemes.”
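    To make the segmentation logic concrete, the sketch below trains an Elman-style simple recurrent network to predict the next phone in a continuous stream built from a toy lexicon, and reads high prediction error (surprisal) as a candidate word or morpheme boundary. It is a minimal sketch of the general approach, not Hay's Experiment 1: the lexicon, network sizes, training regime, and surprisal criterion are all assumptions made here.

```python
# Minimal sketch (not Hay's actual model): an Elman-style simple recurrent
# network predicts the next phone in a continuous stream; positions where the
# trained network is "surprised" tend to be word-initial, and the same signal
# generalizes to junctures in novel concatenations.
import random
import torch
import torch.nn as nn

random.seed(0)
torch.manual_seed(0)

WORDS = ["dog", "hunt", "help", "stream", "strong", "desk"]   # toy lexicon
PHONES = sorted(set("".join(WORDS)))
P2I = {p: i for i, p in enumerate(PHONES)}

def make_stream(n_words):
    """Concatenate random words with no boundaries marked (a 'speech stream')."""
    return "".join(random.choice(WORDS) for _ in range(n_words))

class PhoneSRN(nn.Module):
    def __init__(self, n_phones, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(n_phones, 16)
        self.rnn = nn.RNN(16, hidden, batch_first=True)    # Elman recurrence
        self.out = nn.Linear(hidden, n_phones)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)                                 # next-phone logits

# Cut the stream into fixed-length chunks of (input, next-phone) pairs.
seq = [P2I[p] for p in make_stream(2000)]
L = 50
chunks = [seq[i:i + L + 1] for i in range(0, len(seq) - L - 1, L)]
x = torch.tensor([c[:-1] for c in chunks])
y = torch.tensor([c[1:] for c in chunks])

model = PhoneSRN(len(PHONES))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(20):
    opt.zero_grad()
    loss = loss_fn(model(x).reshape(-1, len(PHONES)), y.reshape(-1))
    loss.backward()
    opt.step()

# Probe a novel concatenation: surprisal should spike at the 'stream'+'dog'
# seam, i.e. the boundary-spotting strategy carries over to unseen junctures.
probe = torch.tensor([[P2I[p] for p in "streamdog"]])
with torch.no_grad():
    logits = model(probe[:, :-1])
    surprisal = nn.CrossEntropyLoss(reduction="none")(
        logits.reshape(-1, len(PHONES)), probe[:, 1:].reshape(-1))
for phone, s in zip("streamdog"[1:], surprisal.tolist()):
    print(f"{phone}: {s:.2f}")
```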

    2.1. Phonotactics in Speech Perception

    There is a rapidly accumulating body of evidence that language-specific phonotactic patterns affect speech perception. Phonotactics have been shown to affect the placement of phoneme category boundaries (Elman and McClelland 1988), performance in phoneme monitoring tasks (Otake et al. 1996), segmentation of nonce forms (Suomi et al. 1997), and perceived well-formedness of nonsense forms (Pierrehumbert 1994, Coleman 1996, Vitevitch et al. 1997, Treiman et al. 2000, Frisch et al. 2000, Hay et al. to appear, and others).
    Several of these results are gradient, indicating that speakers are aware of, and exploit, the statistics of their lexicon. Such statistics also appear to play a vital role in the acquisition process. Jusczyk et al. (1994) show that nine-month-old infants prefer frequent phonotactic patterns in their language to infrequent ones. Saffran, Aslin and Newport (1996) show that, when presented with a string of nonsense words, eight-month-old infants are sensitive to transitional probabilities in the speech stream. This is also true of adults (Saffran, Newport and Aslin 1996). This result is important because it suggests that sensitivity to probabilistic phonotactics plays a role in the segmentation of speech. McQueen (1998) and van der Lugt (1999) provide further evidence that phonotactics are exploited for the task of locating word boundaries.
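    The transitional-probability statistic at issue here is simple: TP(x → y) = frequency(xy) / frequency(x). The short sketch below computes it over a continuous syllable stream built from a small set of nonce words; the mini-lexicon, stream construction, and syllabification are simplifications assumed for illustration, not the original experimental materials.

```python
# Toy illustration of transitional-probability segmentation:
# TP(x -> y) = freq(xy) / freq(x), computed over a continuous syllable stream.
import random
from collections import Counter

random.seed(1)
WORDS = ["tupiro", "golabu", "bidaku", "padoti"]   # CV.CV.CV nonce words

def syllabify(word):
    """Split a CVCVCV nonce word into CV syllables."""
    return [word[i:i + 2] for i in range(0, len(word), 2)]

stream = []
for _ in range(300):                   # words concatenated with no pauses
    stream += syllabify(random.choice(WORDS))

unigrams = Counter(stream)
bigrams = Counter(zip(stream, stream[1:]))

def tp(a, b):
    """Forward transitional probability P(next syllable = b | current = a)."""
    return bigrams[(a, b)] / unigrams[a]

# Within-word transitions are (near-)deterministic; across-word transitions
# are much weaker, so dips in TP mark plausible word boundaries.
print(f"tu -> pi (within word):  {tp('tu', 'pi'):.2f}")
print(f"ro -> go (across words): {tp('ro', 'go'):.2f}")
```

    On this toy stream, within-word transitions come out near 1.0 while across-word transitions hover around 0.25, so positing boundaries wherever TP dips recovers the "words" of the miniature language.
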
  • A Concise Introduction to Linguistics
    • Bruce M. Rowe, Diane P. Levine (Authors)
    • 2022 (Publication Date)
    • Routledge
      (Publisher)
    CHAPTER 3 Phonology: the sound patterns used in languages
    DOI: 10.4324/9781003268369-3
    LEARNING OBJECTIVES
    • Explain the difference in the meanings of the terms phonetics and phonology.
    • Define the term phoneme. Define the term allophone.
    • Analyze the statement: “Phonemes and allophones are considered mental constructs rather than being defined in terms of their specific physical properties.”
    • Describe how a language’s phonemes are determined.
    • Define the term distinctive feature. Explain how distinctive feature analysis helps us understand the systematic aspects of language.
    • List the two major classes of phonological processes and explain how they differ from each other.
    • Analyze the statement “Speech includes redundant features.”
    • Discuss the meaning of the term markedness.
    One lesson gained from phonetics is that humans can produce a considerable variety of speech sounds. Yet each language limits the number of speech sounds that it uses. The sounds are organized into sound systems. Although the sound system of each language differs, some interesting general patterns are found in languages throughout the world. These sound system universals are discussed later in this chapter.
    Phonetics, the subject of Chapter 2, deals with the nature of speech sounds. Phonology is concerned with the factors that make language a system; that is, with the systems used to organize speech sounds. We will begin this chapter with a look at the concept of the phoneme.
    Phonology is the study of the sound system of a language, that is, what sounds are in a language and what the rules are for combining those sounds into larger units. Phonology can also refer to the study of the sound systems of all languages, including universal rules of sound.

    The phoneme and the concept of significant differences in sounds

    Any sound used in speech can be called a phone or phonetic unit or segment. A phone is a unit of sound that can be mentally distinguished from other sounds in what is actually the continuous flow of sound that makes up speech. A phone can be described based on its articulatory, auditory, and acoustic characteristics. For example, [ph
  • Language, Culture, and Society: An Introduction to Linguistic Anthropology
    • James Stanlaw, Nobuko Adachi, Zdenek Salzmann (Authors)
    • 2018 (Publication Date)
    • Routledge
      (Publisher)
    For example, each vowel is characterized by several resonance bands, referred to as formants, which represent the overtone structure of a vowel produced by the shape of the vocal tract. Because the position of the tongue changes with the production of different vowels, the formants vary correspondingly. Finally, auditory phonetics is the study of how speech sounds are perceived and interpreted by the various organs of the human body (ear, auditory nerves, and brain). We will focus on articulatory phonetics here, leaving the other two areas to physicists, neurologists, speech therapists, and other specialists.

    FROM PHONES TO PHONEMES

    Phones: The Smallest Unit of Sound

    The smallest perceptible discrete segment of speech is a phone, a speech sound considered a physical event. A succession of phones in a particular language makes up a stretch of speech, or an utterance. Each utterance is unique, occurring if not under different circumstances at least at a different time. Yet people do not respond to each instance of speech as though it were different from all others. Such utterances as “Where have you been?” or “I have no time just now” are treated as if they were much the same every time they are said, regardless of whether the voice belongs to a woman, man, or child, or happens to be clear or hoarse. Because there is so much likeness in what is objectively different, it is possible to represent speech sounds—phones—through the written symbols of a suitable phonetic alphabet. Linguistic anthropologists make phonetic transcriptions of words or utterances whenever they wish to obtain a sample of speech for subsequent analysis.

    Table 3.4: Chart of the most important manners of articulation (vertical axis) and points of articulation (horizontal axis) of selected consonants. Note: white cells = phonemes used in English; dark gray cells = sounds common in English but not phonemic; light gray cells = not in English.

    Table 3.5: The consonant phonemes in American
  • Understanding English Language Teaching in EFL Context
    Chapter 5. English phonetics and phonology is one such area in which various approaches are adapted and adopted by teachers according to their needs and the needs of their learners. An exploration of such methods related to the dimensions of English language teaching (ELT) can contribute positively to the overall effectiveness of ELT in context. Besides, in the English as a foreign language (EFL) context, phonetics and phonology are much-neglected areas; hence, conceptual understanding is still at an initial stage. The present chapter introduces the basic ideas, aiming to be informative while also pointing out some teaching strategies for phonetics and phonology. The information on phonetics and phonology and their teaching strategies is not new; however, the chapter should help English-language teachers get some hints on how they can use the suggested strategies in their teaching practice. Hence, the expected objectives for the chapter reading are as follows.

    Chapter Objectives:

    • Readers should be able to:
      • Develop an understanding and practice of English speech sounds and their production mechanism.
      • Apply the knowledge of phonetics and phonology in their ELT lessons to improve the four skills in the English language.
    Ladefoged (2014) defined phonetics as “The study of sounds and their physiological production and acoustic qualities” (para. 1). Kohler (2000) defined it as “the study of the spoken medium of language” (p. 1). Phonetics, in other words, is concerned with the mechanics of producing the various sounds used in human language. Phonetics is a vast science and thus has been categorized as a specialized field of knowledge within linguistics. In a general sense, phonetics includes three major types found in the ELT literature: articulatory phonetics, acoustic phonetics, and perceptual phonetics. A brief definition of each will be presented here at the very beginning:
    1. Articulatory phonetics is the study of the way speech sounds are produced through the use of the mouth and peripheral organs.
    2. Acoustic phonetics is the study of speech sounds with respect to their physical (acoustic) qualities.
    3. Perceptual phonetics is the study of how speech sounds are processed in the mind or perceived.
    The study of phonetics guides learners in how to articulate individual sounds (phonemes) properly. It enables the learner to produce sets of phonemes together (in the form of words). It also provides the basis for standardized ways of producing language sounds, which helps not only with standardization but also with clearer spoken communication. Sound pronunciation, correct spelling, and word recognition are the three key areas of language learning which can be taught and improved with the help of phonetic knowledge.
  • Empirical Approaches to the Phonological Structure of Words
    • Christiane Ulbrich, Alexander Werth, Richard Wiese (Authors)
    • 2018 (Publication Date)
    • De Gruyter
      (Publisher)
    In order to explain and model SLA (second language acquisition), two opposing theoretical approaches currently compete. On the one hand, theories of universal grammar (UG) attempt to explain acquisition of a second (or any additional) language on the basis of underlying universal principles. The phonological system of a language user is based on abstract rules, constraints, or principles which are categorical and generalized across languages. On the other hand, usage-based approaches understand language and its regularities as a dynamic adaptive system (Ellis and Larsen-Freeman 2006). General cognitive functions and language use alone are the prerequisites for emergent categorisation and generalisations (Bybee and Hopper 2001). Language use is determined by exposure and frequency in the input. Thus, a comprehensive theory of SLA must supply a theoretical base allowing for integration of neuro-cognitive and environmental factors which engender linguistic behavior. However, it does not seem inconceivable that both approaches apply in the process of language learning, i.e., that language use is grounded in both abstract principles and input patterns (Moisik 2009; Ellis 2005).

    2 Phonotactics in L2 phonology

    During the first year of monolingual first language acquisition, a child's ability to process and produce all possible speech sounds of any of the world's languages weakens, because the L1 system develops according to the input of the ambient language. During acquisition and language mastery, children become more and more constrained by the systematic organization of the L1, referred to as the phonological filter of the L1. Thus, perception is shaped by the L1 perceptual system (Strange 1995: 22 and 39). Furthermore, a number of studies support the application of a phonological filter in language processing. For instance, studies by Domahs et al. (2009), Berent and Lennertz (2010), and Berent et al. (2014) show that monolingual individuals perceive and interpret words and nonce words on the basis of their native language.
    The implication for subsequent L2 acquisition is that the phonological rules of the L1 – including language-specific phonotactic regularities – influence the processing and production of L2 phonology (see Boll-Avetisyan 2018, in this volume). The influence of the L1 is evidenced by a number of perception studies. Halle et al. (1998), for example, report perceptual assimilation in numerous tasks, in which speakers tend to mispronounce, substitute, and adapt L2 consonant clusters that are illegal in their L1 to clusters that are legal in it.
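    As a purely hypothetical illustration of this kind of cluster adaptation (it is not Halle et al.'s task, and the toy L1 below is invented), the following sketch repairs onsets that are illegal in a toy L1 by vowel epenthesis, one common repair strategy alongside the substitution and deletion mentioned above.

```python
# Hypothetical sketch: a toy L1 whose only complex onsets are s+stop clusters.
# Onsets that violate this phonotactic restriction are "repaired" by inserting
# an epenthetic vowel, mimicking one way illegal L2 clusters get adapted.

LEGAL_CLUSTERS = {("s", "t"), ("s", "p"), ("s", "k")}   # invented inventory
EPENTHETIC_VOWEL = "u"

def is_legal_onset(onset):
    """In this toy L1 a single consonant or an s+stop cluster is a legal onset."""
    return len(onset) <= 1 or tuple(onset) in LEGAL_CLUSTERS

def repair_onset(onset):
    """Keep the longest legal prefix, insert an epenthetic vowel, and recurse
    on the remainder until every piece is phonotactically legal."""
    onset = list(onset)
    if is_legal_onset(onset):
        return onset
    for cut in range(len(onset) - 1, 0, -1):
        if is_legal_onset(onset[:cut]):
            return onset[:cut] + [EPENTHETIC_VOWEL] + repair_onset(onset[cut:])
    return onset  # unreachable here: a single consonant is always legal

print(repair_onset(["s", "t"]))        # legal in the toy L1 -> ['s', 't']
print(repair_onset(["s", "t", "r"]))   # illegal /str/       -> ['s', 't', 'u', 'r']
print(repair_onset(["g", "z"]))        # illegal /gz/        -> ['g', 'u', 'z']
```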