Languages & Linguistics

Syntactic Structures

Syntactic structures refer to the arrangement of words and phrases to form grammatically correct sentences in a language. This concept is central to understanding how different languages organize and convey meaning through their grammar and syntax. By analyzing syntactic structures, linguists can gain insights into the underlying principles that govern language and communication.

Written by Perlego with AI-assistance

11 Key excerpts on "Syntactic Structures"

  • Linguistics for Translators
    • Ali Almanna, Juliane House (Authors)
    • 2023 (Publication Date)
    • Routledge
      (Publisher)
    syntaxis’ means ‘arrangement’, thus referring to the way in which lexical items (i.e. words) are arranged in a given clause or sentence. To put this differently, syntax is the branch of linguistics that focuses on the study of the structure and formation of sentences along with the relationship of their parts. In English, for instance, in a sentence of the following kind
    The teacher asked the students many questions yesterday.
    we have a ‘subject’ and ‘predicate’, to borrow terms from traditional grammar, as modelled below:
    What about your language or the languages that you are familiar with? Is any predicate marker used?
    Speakers of some creole languages, such as Tok Pisin, tend to use the letter ‘i’ as a predicate marker when the subject is a third person, as in:
    Now, let us compare the sentence ‘The teacher asked the students many questions yesterday’ with the following sentence which has the same structure to see if they have the same meaning or not.
    The students asked the teacher many questions yesterday.
    Although these two sentences have the same syntactic structure (Subject, Verb, Indirect Object, Direct Object, and Adverb of Time), they have different meanings. This is because each noun phrase (known as an ‘argument’ in semantics) fills a different semantic role (for more details, see the next chapter). To explain, let us identify the semantic role assigned to each noun phrase used in the previous sentences by raising the following questions:
  • Introductory Linguistics for Speech and Language Therapy Practice
    • Jan McAllister, James E. Miller (Authors)
    • 2013 (Publication Date)
    • Wiley-Blackwell
      (Publisher)
    Before you embark on this chapter, it would be advisable to check that you have understood the material in Chapter 5, Parts of Speech, and Chapter 6, Word Structure. In Chapter 5, we were able to give only a preliminary account of parts of speech, because a full account depends on an understanding of sentence structure, which we will begin to address here. But you will need to have a basic grasp of parts of speech if you are to be able to understand what we say here. In Chapter 6, you should review the material about inflectional suffixes, because there is a close relationship between sentence structure and inflectional morphology.
    As we noted in Chapter 1, the level of linguistic description that is concerned with sentence structure is called syntax. Syntax (from Classical Greek sun ‘with, together’ and taksis ‘placing’) is the study of phrases and clauses, and the key concept, a very traditional one, is that of construction – something that is ‘built together’ out of different pieces. When we examine constructions, we pay attention to the kinds of bits that combine to build them, the order in which the bits are arranged and the links between the bits – between words in phrases, between phrases in clauses and between clauses in sentences. These basic structural relationships will be our focus in this chapter. In later chapters, we will extend these fundamental concepts to explain how more complex syntactic structures are formed and how these are used communicatively.
    7.1 Why do SLTs need this knowledge?
    A client who could not use syntax would only be able to produce absurdly limited messages. Contrast the child who can produce the isolated words pictures and book with the child who can say the pictures in the green book or Show me the pictures in the green book or even not the green book, the blue book – with the Gruffalo
  • Contrasting English and German Grammar
    In linguistics, it is not useful to simply consider every utterance ever made as part of the language to be described. People make mistakes (this issue is traditionally discussed under the key words competence versus performance). Also, there are varieties or dialects that we don’t simply want to lump together. Modern linguistics considers various data sources in order to find out what is and isn’t part of the language under investigation. Native speakers’ judgements on acceptability, appropriateness, and truth reveal their knowledge of language. In simple cases it may be enough to collect a few judgements (for example to establish that (1) is an acceptable English sentence: everyone would clearly agree). Often, the data are less clear. More sophisticated methods to collect data include judgements elicited, e.g., by a questionnaire study, controlled use of corpora, psycholinguistic experiments using reading or reaction times and many more. Methodologically, linguistics has made significant progress (and learned a lot from psychology) in recent years.
    The good news for us is that many of the phenomena discussed in this textbook are empirically so clear that advanced methods are unnecessary. Where they are not, we will point this out.

    2 . Goals and models in syntax and semantics

    A grammar in the linguistic sense has several components because your knowledge of language encompasses various aspects of language: its sound structure, the ways in which it can build new words, how it builds sentences, and the way it is interpreted. We are concerned here with the last two components: sentence structure and meaning, i.e., syntax and semantics .

    2.1. Syntax

    The task of syntax is to characterize the well-formed sentences of a language. Clearly, the ability to produce and recognize grammatical sentences is part of the knowledge that a speaker of the language has. Therefore, a grammar as understood by the linguist must capture this aspect of knowledge of language. Concentrating on English, the goal of syntax is to define a system of rules that describes all and only the well-formed sentences of English.

    Syntax

    The syntax component of the grammar defines a system of rules that describes the well-formed sentences of English.
    Let us dwell on this characterization for a moment. Why do we say ‘system of rules’ above? There are many ways in which one could try to describe all and only the well-formed sentences of English. A very simple way to do so, it seems, would be to list them.
    It turns out that a list is a very bad model of syntactic knowledge, for two reasons: (i) people know about novel sentences, and (ii) people know about infinitely many sentences. Let us look at these two facts in turn.
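Both facts fall out of treating the grammar as a rule system rather than a list: a recursive rule can re-apply to its own output, so there is no longest sentence and hence no complete list. A minimal sketch in Python (the rule and vocabulary here are invented for illustration, not taken from the text):

```python
def sentence(embeddings):
    """Build a grammatical sentence with `embeddings` levels of
    clausal embedding, applying the recursive pattern
    S -> NP "thinks" S once per level."""
    if embeddings == 0:
        return "the student left"
    return "the teacher thinks " + sentence(embeddings - 1)

# Each extra application of the recursive rule yields a new, longer
# sentence; since the rule can always apply once more, no finite
# list could enumerate all the sentences speakers command.
print(sentence(2))
# -> the teacher thinks the teacher thinks the student left
```

The same sketch illustrates novelty: a speaker who has never encountered `sentence(7)` before still recognizes it as well-formed, because it is licensed by the same two rules.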
    Like all sciences, syntax begins with a description of the basic parts or ‘atoms’ that go into the model. The atoms or basic building blocks of sentences are words; we construct complex expressions (sentences) by assembling strings of words in the appropriate ways. A well-known experiment in psycholinguistics introduced a new “word”, wug. (The word was made up by psychologist Jean Berko Gleason’s group, who designed the original experiment in Boston.) Children were confronted with the word and the picture in (9) below.
  • Meaning and Structure
    Structuralism of (Post)Analytic Philosophers

    • Jaroslav Peregrin (Author)
    • 2017 (Publication Date)
    • Routledge
      (Publisher)
    Clearly, seeing language as a part-whole system is tantamount to seeing some of its expressions as wholes having others as their parts, and hence to ascribe them certain structures. Nevertheless, in this context we have talked merely about one type of structure, a structure which can be called syntactic. We have not faced the need of any other kind of structure which would be reasonably conceivable as semantic or logical. The point is that semantics, from our viewpoint, is not a matter of an independent structure parallel to the syntactic structure, but rather a matter of the way the opposition between truth and falsity gets projected along the (syntactic) structure from wholes to parts. We could also say that from the Quino-Davidsonian viewpoint of the radical translator/interpreter we see the syntax as a matter of which sounds the speakers use, whereas we see semantics as a very different matter of how they use them. It is true that if we take syntax to be an arbitrary reconstruction of language as a finitely generated part-whole system, then we can have many different ‘syntaxes’, some of which may be more suitable for the subsequent semantic analysis (the capturing of relevant inferences) than others. Moreover, it may turn out that some features or some elements of the syntactic structure of an expression are partly or wholly irrelevant from the viewpoint of its semantics; and thus only a part of the syntactic structure might appear as semantically relevant. This can lead us to talking about a semantic structure or about a logical form as something different from syntactic structure after all. However, construed in this way, this structure will not be independent of syntax – it will be merely a suitably ‘purified’ version of syntactic structure (or of one of the possible syntactic structures). It will be what results if we, in the words of Frege (1879, p.IV), “forego expressing anything that is without significance for the inferential sequence”
  • Language, Mind, and Brain
    • T. W. Simon, R. J. Scholes (Authors)
    • 2019 (Publication Date)
    • Psychology Press
      (Publisher)
    Given this view, it appears that the role of transformational rules is to specify the relationships between the structures that occur in actuality (surface structures) and the potential structures generated by the phrase structure rules (deep structures) and to define co-occurrence restrictions. No set of structures need specify the sequence of elements for which structural relationships obtain.
    SYNTAX AS ORDER AND ORDER AS SYNTAX
    Our academic training as well as, apparently, our initial intuitions make it very difficult for us to deal with a claim that syntax and ordering are independent. Nearly everyone who has reacted to the ideas presented in this chapter has pointed out that English does employ order to convey syntactic distinctions (while, presumably, certain other languages, like languages employing case markers, don’t). While this reaction is no doubt a considered and educated one, it is, I believe, at best limited and at worst wrong. To illustrate the alternative view under consideration here, we posit two distinct conditions about semantics and ordering and semantics and syntax:
    1. If you want to express X, the order of the elements of X must be in the sequence a + b + c + d.
    2. If you want to express X, the syntactic relations must be of the type A, B, C, D.
    On the first condition, if you want to say that some unnamed male individual moved in such a way that the original position was in closer approximation to the speaker than the second position, you must say “he went from here to there.” On the second condition, if you want to construct a sentence capturing the intention above, you must have the syntactic structure indicated by:
    I would ask the reader to note that the second condition is quite independent of the first. Specifying the structure appropriate to the meaning can be done with any spatial arrangement whatsoever so long as the relationships are maintained. For example
    Consider that old classic case, active and passive sentences. For some reason which I now find unmotivated, we generally feel that ordering is syntax for active sentence but formatives are syntax for passives, i.e., that in Tom hit Bill it is the order that specifies the relations, but in Bill was hit by Tom
  • The Routledge Handbook of Asian Linguistics
    • Chris Shei, Saihong Li (Authors)
    • 2022 (Publication Date)
    • Routledge
      (Publisher)
    PART II Syntactic Structures
    6 WORD-ORDER VARIATIONS IN ASIAN LANGUAGES
    At the syntax-processing interface
    Jieun Kiaer
    DOI: 10.4324/9781003090205-9
    Introduction
    The contemporary linguistic framework regarding syntactic structure-building is deeply rooted in English and other Western-European language theory. These languages exhibit rigid word orders, and data from these languages have been typically used as evidence for verb-driven theories of syntactic structure-building. Such theories overlook the cases of most Asian languages, which have flexible word ordering (with exceptions). These languages are considerably understudied and under-valued in contemporary theoretical linguistics, and modelling Asian languages on word order-based frameworks is ineffectual. In both synchronic and diachronic variations, we can easily see that rigid ordering is not so common in human languages, let alone Asian languages. Goddard (2005: 7) notes that generally speaking, the languages of East and Southeast Asia tend to have a more flexible and ‘expressive’ word order than English, and almost all languages in this region allow some variation in the constituent order of a simple sentence. Indeed, almost all languages exhibit flexibility in word orders to a certain extent. Even in languages known for rigid word orders, flexibility can be found; for example, in English there is great flexibility permitted between a direct object and an indirect object. In languages with flexible word orders, particles, prosody, and context play a crucial role in unfolding syntactic structures. Languages with highly flexible word orders have been described as ‘scrambling’ languages (Ross 1967), which suggests that these languages have a basic word order fundamental to their sentence structure from which alternative, ‘scrambled’ word orders are realised
  • Aspects of Language Production
    Chapter Twelve Conceptual structures in language production
    Mark Smith
    School of Psychology, University of Birmingham, UK
    What can the grammatical structure of a sentence, its lexemes, morphemes, syntax, and so on, tell us about the conceptual structure that underlies that sentence? Historically, there have always been two distinct approaches to this question—the empiricist and the rationalist approach. According to the empiricist, conceptual information is derived directly from an individual’s experience of the world and as a result comes to inherit the idiosyncratic, indeterminate, analogue complexity of that experience. As such the empiricist is forced to deny that there can be any precise and comprehensive replication of conceptual information in the fixed, shared, digital structure of language. Instead, the empiricist envisions that grammatical structures constitute a recasting of conceptual structures into a simplified and impersonal form. In contrast, the rationalist argues that conceptual information has a transcendental origin and, consequently, an elegant structure which is impersonal, immutable, and universal in nature and which can be imposed on to the complex world around us thereby rendering it comprehensible. Consequently, the rationalist is able to argue that there is a natural affinity between conceptual and grammatical information with the digital structure of a sentence providing a faithful mirror of the order inherent in the concepts that underlie it.
    In this chapter, the relation between conceptual and grammatical structures is considered in the context of speech production. Contemporary accounts of the relation between conceptual and grammatical structures in speech are wholly rationalist in outlook (Garrett, 1982; Levelt, Roelofs, & Meyer, 1999) and the essay begins by questioning the theoretical coherence of these accounts. In the next section of the essay, a diverse body of empirical data is presented that undermines the rationalist claim that conceptual and grammatical structures precisely replicate each other. In the third section, it is demonstrated that the speaker produces grammatical structures which simplify conceptual structures in order to reduce processing effort. In the fourth section, it is argued that the informational complexity of conceptual structures is such that they cannot be fully preserved in the simple, digital code of language. These four sections are drawn together in the final section of the chapter which concludes in favour of the empiricist view of the relation between conceptual and grammatical structures.
  • The Concise New Makers of Modern Culture
    • Justin Wintle (Author)
    • 2008 (Publication Date)
    • Routledge
      (Publisher)
    Aspects model, is that they should not affect the meaning of a sentence, only its ‘surface structure’. A structurally ambiguous sentence, like ‘flying planes can be dangerous’, would therefore be assigned two quite different deep structures; it is the operation of the transformational rules which produces a common surface structure. When a sentence has been processed by the syntactic component the surface structure is converted into a sequence of speech sounds by a series of phonological rules. The syntactic component is central to Chomsky’s model, with the semantic and phonological components playing interpretative roles. The precise relationship between the various components has given rise to much controversy, even within the ‘generativist’ camp. Chomsky himself has conceded that the surface structure of a sentence can partly determine its meaning, while some have argued that surface structures can be assigned semantic interpretations without the need for an intervening level of deep structure. Yet another breakaway group have proposed a theory of ‘generative semantics’ in which there is no clear boundary between syntax and semantics. Despite the disagreement surrounding the correct formulation of the theory, generative linguists share the same basic assumptions about language and have succeeded in demonstrating beyond doubt the innateness and complexity of the language faculty.
    Chomsky’s work in linguistics has profound implications for our understanding of humanity, which Chomsky himself has been assiduous to exploit. In particular, his concept of a ‘grammar’ shared by all underwrites the notion of universal equality, while what people do with their innate linguistic abilities reinforces notions of individual autonomy. Referring to himself variously as a ‘liberal anarchist’ and as an ‘anarchic socialist’, across five decades Chomsky has sustained a scathing polemic against aspects of American foreign policy and against the ‘corporatist’ state, sometimes lampooned by him as all but synonymous with the ‘industrial-military complex’.
    Chomsky has claimed that at the age of ten he wrote an essay about the threat of fascism, following the fall of Barcelona to Franco’s nationalist forces during the Spanish Civil War. In 1967 he achieved celebrity with ‘The Responsibility of Intellectuals’, a keynote attack on America’s involvement in Vietnam published in the New York Review of Books. Since then he has regularly challenged received definitions of ‘terrorism’, claiming that more often than not such violence is the product of particularly American intervention in the affairs of smaller, weaker nations by stronger, mainly Western powers. A persistent critic of both the Israeli and Saudi Arabian states, he has voiced support for the Palestinian cause. In the 1970s he challenged Western media coverage of alleged human rights abuses in
    Mao Zedong’s
  • Mind, Brain, and Language
    Multidisciplinary Perspectives

    • Marie T. Banich, Molly Mack (Authors)
    • 2003 (Publication Date)
    • Psychology Press
      (Publisher)
    derivation. A derivation implicitly represents a grammatical structure for the derived word sequence. On the second interpretation, rules are templates for directly checking that certain linguistic structures are grammatical.
    The simplest way to see how such grammars describe syntactic structure is to examine a grammar for a simple invented language. In (2) we give a complete grammar for the syntax of an invented language called Little-English. Little-English expressions are like certain English expressions, but Little-English has only six words (boy, girl, met, that, thinks, and this). The categories of Little-English are represented by the labels listed in (1).
    (1) Lexical categories
      N    noun
      Det  determinative
      Vc   clausally complemented verb (that is, a verb such as think which may be accompanied by a tensed subordinate clause)
      Vt   transitive verb (that is, a verb such as met which may be accompanied by a direct object NP)
      Phrasal categories
      NP   noun phrase
      VP   verb phrase
      S    sentence or clause (a sentence is basically a clause standing alone)
    The rules are stated in the form “α → β”, where α is the name of some syntactic category (lexical or phrasal) from the list in (1) and β is either a word or a sequence of exactly two category symbols. The entire syntax for Little-English is stated in (2).
    (2) A grammar for Little-English
    a. S → NP VP
    b. NP → Det N
    c. VP → Vt NP
    d. VP → Vc S
    e. Det → this
    f. Det → that
    g. N → boy
    h. N → girl
    i. Vt → met
    j. Vc → thinks
    Each formula in (2) is a grammatical rule. Rule (2a), for example, says that a clause consists of a noun phrase followed by a verb phrase (these are the traditional “subject” and “predicate” of the clause). Rule (2c) says that a verb phrase of Little-English may consist of a transitive verb followed by a noun phrase. Rule (2h) says that “girl” is a member of the lexical category of nouns in Little-English.
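The rule system in (2) is small enough to run as a program. The sketch below (Python; the data structures and depth bound are our own, not part of the original text) encodes rules (2a)–(2j) and enumerates every Little-English sentence whose embedding under the recursive rule (2d) stays within a bound:

```python
# The Little-English grammar of (2): each category maps to its possible
# expansions; an expansion is a list of categories or a single word.
RULES = {
    "S":   [["NP", "VP"]],                # (2a)
    "NP":  [["Det", "N"]],                # (2b)
    "VP":  [["Vt", "NP"], ["Vc", "S"]],   # (2c), (2d)
    "Det": [["this"], ["that"]],          # (2e), (2f)
    "N":   [["boy"], ["girl"]],           # (2g), (2h)
    "Vt":  [["met"]],                     # (2i)
    "Vc":  [["thinks"]],                  # (2j)
}

def expand(category, depth):
    """Yield every word sequence derivable from `category`, allowing
    at most `depth` further uses of the recursive rule (2d)."""
    if category not in RULES:             # a word: nothing to expand
        yield [category]
        return
    for rhs in RULES[category]:
        if rhs == ["Vc", "S"]:
            if depth == 0:
                continue                  # cut the recursion off here
            yield from sequences(rhs, depth - 1)
        else:
            yield from sequences(rhs, depth)

def sequences(rhs, depth):
    """Yield every combination of expansions of the symbols in `rhs`."""
    if not rhs:
        yield []
        return
    for head in expand(rhs[0], depth):
        for tail in sequences(rhs[1:], depth):
            yield head + tail

sentences = sorted(" ".join(words) for words in expand("S", 1))
# 16 sentences use no embedding ("this boy met that girl", ...) and 64
# more use one application of rule (2d), e.g.
# "this boy thinks that girl met this girl".
```

Raising the depth bound adds ever-longer sentences, which is exactly the unboundedness that rule (2d) builds into the grammar.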
  • Psycholinguistics (PLE: Psycholinguistics)
    Unlike Winograd’s writings, many other discussions in AI that purport to be about interaction between syntactic and semantic processing are largely about formal autonomy. For example, in the language understanding programs written by Schank and his colleagues (see Schank, 1975) the definitions of structural categories make use of semantic concepts, and therefore violate Chomsky’s principle of formal autonomy. Schank refers to this aspect of his programs by saying that comprehension is ‘semantics-driven’. This description suggests that semantic analysis guides parsing. Indeed, Schank claims that syntactic analysis is not always necessary.
    The rules of French grammar are not crucial in understanding French. (1975, 12)
    On these points he is the victim of the same confusion between syntax and semantics as the Generative Semanticists (see chapter 2 ). A structural (that is syntactic) analysis of a sentence is easily confused with a semantic one if the grammar violates the formal autonomy principle. A ‘semantic’ grammar may have syntactic categories such as AGENT and LOCATION, with the result that the syntactic structure of a sentence displays its meaning transparently and can be mistaken for a semantic representation. It then appears that the sentence has been interpreted without recourse to syntactic analysis. However, the interpretation is performed by the person looking at what is a structural (syntactic) analysis of the sentence. The occurrence of a symbol such as AGENT (rather than, say, NP) in a phrase marker does not determine what that symbol means. Another reason why analysis by a semantic grammar seems to bypass syntax is that a semantic grammar for analysing discourse about a specific domain of knowledge is comparatively simple (see for example, Bruce, 1982, who presents a grammar for analysing questions about business trips), and ‘semantic’ phrase markers have little structure. The major problem with simple semantic grammars is that they do not generalize to other knowledge domains.
    From Schank’s descriptions of his programs it is impossible to tell whether syntactic and semantic processors interact. The reason is that Schank does assume the structural descriptions constructed by his program to be semantically transparent, and no proper semantic
  • Learning to Write
    • Gunther Kress (Author)
    • 2003 (Publication Date)
    • Routledge
      (Publisher)
    As far as the former is concerned, the arguments are clearly presented by Feldman and Toulmin (1975). They point out that the fact that data can be described in terms of formal theoretical systems is not sufficient justification for assuming that these descriptions capture anything more than the theory’s own formalism. It is not sufficient evidence for assuming that the formalism corresponds to any cognitive structure. The same point is made in a slightly different way in a recent article by Patricia Smith Churchland: ‘There is no doubt that some of the information bearing states of the central nervous system are not a species of sentential attitude; that is, they are not describable in terms of the person’s being in a certain functional state whose structures and elements are isomorphic to the structure and elements of sentences. Obviously, for example, the results of information processing in the retina cannot be described as the person’s believing that p, or thinking that p, or thinking that he sees an x, or anything of the sort.’ Despite Feldman and Toulmin’s appropriate caution, three points need to be made: (1) If a number of descriptions, arising out of significantly different theories, produce evidence of similar structures inherent in such descriptions they will be to that extent less arbitrary and dependent merely on the vagaries of one theory. In other words we are likely to be closer to the description of reality at some level. (2) Though linguistic structures derived from such descriptions will give no indication about cognitive structures as such, they do point to a level of organization at some stage in a chain from cognitive to linguistic structure which gives rise to such structures. 
In other words, while we cannot make any direct inferences about the neurophysiological organization corresponding to given cognitive states, we can say that at some level there is a cognitive organization such that it produces linguistic materials which display given structures. (3) Given the point made earlier concerning the significance of certain types of linguistic materials, (2) above means that we can make legitimate inferences about the nature and content of the level of cognitive organization mentioned under (2).
    The second point is equally important, and certainly less acknowledged than the former. As I have stressed in preceding chapters, speech and writing are two distinct modes of language, each with its linguistic (phonetic/graphic, syntactic and textual) characteristics. When children arrive at school they are in many respects competent speakers; they are not, however, competent writers. Competence in speaking and competence in writing have not been treated as distinct competencies. Children’s written language has consequently been treated unproblematically as ‘children’s language’, with unfortunate consequences. For instance, in his important monograph Syntactic Maturity in Schoolchildren and Adults (1970), Kellogg Hunt discusses speech as distinct from writing, only to collapse them into each other in terms of applying the same descriptive categories to both, that is, T-unit length, clauses per T-unit, clause-length, etc. Similarly in discussing results of analyses: ‘On the basis of Hunt’s and O’Donnell’s studies, and in the absence so far of contradictory data, there is evidence that throughout the school years, from kindergarten to graduation, children learn to use a larger and larger number of sentence-containing transformations per main clause in their writing…. In schoolchildren’s speech the same tendency appears to exist up to the seventh grade, and future investigators may find that the tendency continues through the later grades’ (Kellogg Hunt (1970, p. 9); my italics). What Hunt’s data do seem to point to is that over time the structure of written English begins to influence the structure of spoken English—as one would expect: ‘The number of clauses per T-unit for speech did not increase at every interval; nonetheless the number did increase, though in a zig-zag upward path. The values for kindergarten, and grades 1, 2, 3, 5 and 7 were, respectively, 1.16, 1.19, 1.18, 1.21, 1.19, 1.26’ (pp. 8–9). 
To put a gloss on Hunt’s findings: the syntax of speech is marked by a relative preponderance of chaining constructions, while the syntax of writing is marked by a relatively greater degree of embedding constructions. My hypothesis is, as I attempted to show in Chapter 3
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.