
Multimodality

Multimodality refers to the use of multiple modes of communication, such as language, images, gestures, and sound, to convey meaning. It emphasizes the interconnectedness of different modes and their combined impact on communication. In linguistics, multimodality is studied to understand how meaning is constructed and conveyed through various modes of expression.


10 Key excerpts on "Multimodality"

  • Multimodality and Classroom Languaging Dynamics

    An Ecosocial Semiotic Perspective in Asian Contexts

    • Dan Shi(Author)
    • 2021(Publication Date)
    • Routledge
      (Publisher)
    Attention has been gradually diverted from the centrality of language to the impacts of other semiotic modes on meaning-making processes, as distinct from the traditional assumptions of language communication. The multimodal approach comes into existence accordingly, tailored for the comprehension of how bodily movements, such as facial expression and gesture, are integrated with speech to function in the meaning-making procedure in social interaction. Multimodality, as an approach to the study of language in general, has been perceived as an application rather than a theory, entailing other methodological supports, which indicates that it can be examined from heterogeneous perspectives by the application of multidisciplinary research approaches (Bezemer & Jewitt, 2010). Social linguistic and social semiotic approaches to Multimodality are two typical perspectives where multimodal studies have enjoyed further development. From the social linguistic view, the mode of speaking or writing is primary, compared to other modes regarded as playing a supplementary role in the course of meaning construction (Bezemer & Jewitt, 2010). In contrast, from the perspective of social semiotics, it is presumed that modes of diversified types, which are endowed with equivalent status in functioning influenced by historical and socio-cultural evolution, are integral to the generation of meaning in human interaction (Bezemer & Jewitt, 2010; Kress, 2010). The social semiotic approach to Multimodality has been closely associated with Halliday’s systemic functional linguistics, designed to identify how ideational, interpersonal and textual meanings are constructed by different semiotic modes (Bezemer & Jewitt, 2010).
  • Literacy Theories for the Digital Age

    Social, Critical, Multimodal, Spatial, Material and Sensory Lenses

    4 Multimodal Literacies

    A multimodal approach to technology-mediated learning offers a way of thinking about the relationship between semiotic resources and people’s meaning making (Jewitt, 2006: 16).

    Key Concepts of Multimodal Literacies

    Multimodal approaches to literacy are currently prolific in educational research, indicated by the steady increase in the number of research studies of digitally mediated literacy practices (Mills, 2009). The popular terms ‘multimodal’ and ‘literacy’ now appear together in over 30,000 scholarly texts accessible to Google Scholar, though definitions of Multimodality are not always alike. The dominant theory of Multimodality addressed in this chapter is positioned within the theoretical framework of social semiotics (Kress & van Leeuwen, 2006). Social semiotics explicitly attends to meaning-making of diverse kinds, whether of words, actions, images, somatic meanings or other modes (Thibault, 1993). Therefore, by definition, social semiotics acknowledges the role of non-linguistic modes in human social meaning. Multimodality is defined as ‘... the use of several semiotic modes in the design of a semiotic product or event’ (Kress et al., 2001: 20). Reading and writing have always been multimodal, since these literate practices involve the decoding and encoding of words, while similarly attending to spatial layout of the text, images and other modes of representation (e.g. gestural meanings of represented characters, material features of the book). Yet undeniably, people-driven technological developments of communication have given rise to a much more diverse range of texts and textual practices.
  • Introducing Multimodality
    • Carey Jewitt, Jeff Bezemer, Kay O'Halloran(Authors)
    • 2016(Publication Date)
    • Routledge
      (Publisher)
    Chapter 2

    Why engage with Multimodality?

    Introduction

    In the previous chapter, we formulated three key premises of Multimodality. We highlighted that Multimodality does not just mean that recognition is given to the fact that people use a number of different semiotic resources. Multimodality also means recognition of the differences among different semiotic resources and of the ways in which they are combined in actual instances of meaning making. We then also pointed to the methodological implication of these premises, namely the need to attend to multimodal wholes.
    In this chapter, we will elaborate on these key premises by engaging with and challenging assumptions about language widely held among those studying language and the general public. We assume that few people will disagree with the view that language is one among a number of quite different sets of resources that humans have developed to make meaning. What we will focus on instead are the following two more contentious issues:
    • Language is the most resourceful, important and widely used of all modes.
    • Language can be studied in isolation.
    The counterarguments we will present are derived from the approaches to studying Multimodality that are central to this book. That means that you will not find arguments here that come from psychological and ethnological studies, suggesting, for example, that ‘50 per cent of communication is body language’. The focus is on the detailed study of empirical traces of meaning making using analytical procedures from disciplines that originally focused on the study of language in use. Our arguments have both theoretical and methodological ramifications: they challenge claims about the place of language in the social world as well as claims about how insight can be gained in language.
    Before we present these arguments, we need to clarify what we mean by language and mode or semiotic resource. By ‘language’ we mean speech and writing. By ‘mode’ and ‘semiotic resource’, we mean, for the moment, a set of resources, shaped over time by socially and culturally organized communities, for making meaning. That means that we break with the practice of naming all means of making meaning ‘language’, prefixed with such terms as ‘body’ or ‘sign’ or ‘non-verbal’ or ‘visual’. We argue that language is language and that the range of resources subsumed under ‘body language’ – gesture, gaze – are in fact distinctly different modes, each significantly different from language, and therefore demand separate terms. Indeed, some Multimodality scholars have proposed to treat speech and writing as separate modes, for they constitute sets of resources that are only partially overlapping. (Others conceptualize language differently. For instance, in systemic functional linguistics, language is conceptualized as a semiotic resource, and spoken and written language is seen as variations in language use.) It is important to reiterate here that we use this terminology as a means of synthesizing different approaches to Multimodality; it has not (yet) been universally adopted!
  • Multimodal Teaching and Learning

    The Rhetorics of the Science Classroom

    • Gunther Kress, Carey Jewitt, Jon Ogborn, Charalampos Tsatsarelis(Authors)
    • 2014(Publication Date)
    modes. All modes make meanings differently, and the meanings made are not always available to or understood by all readers. Second, the meaning of language-as-speech or language-as-writing, as of all other modes, is always interwoven with the meanings made by all the other modes simultaneously present and operating in the communicative context, and this interaction itself produces meaning. Third, what can be considered a communicative mode is more or less always open – systems of meaning are fluid, modes of communication develop and change in response to the communicative needs of society, new modes are created, existing modes are transformed.
    Moreover the question of whether X is a mode or not is a question specific to a particular community. As laypersons we may regard visual image to be a mode, while a professional photographer will say that photography has rules and practices, elements and materiality quite distinct from that of painting, and that the two are distinct modes. It is unproductive to enter into general debates on this outside of the quite specific contexts of social groups and their semiotic practices. The point is important here because, as we suggested in the previous chapter, the community of science (educators) may well have developed modes which are not recognized as such outside that community. Learning science is then in one part learning to recognize the modes of that community.
    Halliday’s (1985) social theory of communication provided the starting point for our exploration of multimodal teaching and learning in the science classroom. He argues that in verbal interactions with others we have at our disposal networks of options (or sets of semiotic alternatives) which are realized through sets of options of the semantic system. The elements of the semantic system of language are differentiated so as to reflect the social function of the utterance as representation, interaction and message, and are realized in language by the lexico-grammar. The principle underpinning this model is that language is as it is because of the functions (meanings) it has evolved to realize. It is organized to function with respect to social interests and demands placed on it by those who use it in their social lives. In this way language can be understood as the cultural shaping of a medium (sound in the case of language-as-speech) into regular forms for representation (grammar) in which it becomes the material resource (as signifiers) for meanings in the constant new making of signs.
  • The Discourse of YouTube

    Multimodal Text in a Global Context

    • Phil Benson(Author)
    • 2016(Publication Date)
    • Routledge
      (Publisher)
    Semiotic systems also typically have syntax, which refers here to the ways in which complex meanings are made through combinations of signs. The syntax of speech entails linear progression over time, while writing entails linear progression in space. Less complex semiotic systems, such as the pedestrian crossing system described above, also have linear syntax. Green and red lights appear in sequence, and flashing lights (or ‘on-off’ linear sequences) may signify intermediate states. In this sense, a red light has the more complex meaning of ‘do not cross until the light changes to green’. Non-linear modes of visual imagery, such as painting or photography, lack linear syntax, but they may, nevertheless, have a ‘grammar’ of sorts represented by conventional reading pathways, although these are generally preferred rather than prescriptive (Kress and van Leeuwen, 2001). When a multimodal text, such as a web page, incorporates both linear and non-linear modes, the text as a whole is likely to have a non-linear character in the sense there is no prescribed order in which the different modal units should be read, although the page layout may suggest a preferred reading pathway, such as left to right and top to bottom.
    Multimodality refers to the use of multiple modes of expression in two senses. First, language is a multimodal system in that it has three distinct modes of expression in speech, writing and sign. Occasionally, these three modes are integrated into a multimodal text, such as a television broadcast of a speech, which is both subtitled and accompanied by simultaneous interpretation in sign. In a text of this kind, we have three modes, but one semiotic system. Second, communication is said to be multimodal when it involves interdependent semiotic systems. In face-to-face spoken interaction, for example, speech and gesture are interdependent. In a handwritten, printed or screen text, writing and layout are also interdependent. Gesture and layout are semiotically distinct from speech and writing, yet they also appear to be tied to them as features of spoken and written communication. Bezemer and Kress (2008) refer to the modes that accompany speech and writing, in this sense, as ‘modal resources’ rather than ‘modes’.
    Kress and van Leeuwen (2001: 21) describe modes as ‘semiotic resources which allow the simultaneous realization of discourses and types of (inter) action’. Kress (2014: 60) lists several examples of modes, including speech, writing, image, music, soundtrack and layout. While it seems clear that modes differ from, and somehow stand between, semiotic systems and the media they exploit as raw materials, the question of what exactly is and is not a mode has proved intractable (Bateman, 2011; Kress, 2014; Matthiessen, 2007). The main problem, perhaps, lies in the difficulty of making clear-cut distinctions of the kind that can be drawn between language (semiotic system), speech (mode) and vocal sound (medium) when speaking of semiotic systems other than language. If mode refers to the interaction between a semiotic system and a medium, speech and writing are clearly modes; speech being a way of moulding vocal sounds to the purposes of language and writing playing the same role for language and graphic inscription. However, if image and music are also modes, as Kress suggests, it is difficult in each case to specify the corresponding semiotic system and medium. Image and music might be better described as semiotic systems, each with its own distinctive modes of expression. Modes of expression of visual imagery might include drawing, painting and photography, each of which represents a distinct way of using graphic media to express visual meanings. This is also problematic, however, as drawing, painting and photography seem to be more akin to handwritten, printed and screen writing than they are to writing as a mode that can be realized using several different graphic media. Considered as a semiotic system, music has a similar history to language, with modes that use sound as a medium developing first and the graphic mode of scoring developing later. While analyses of these kinds can be productive, the essential problem remains that, for many semiotic systems, we lack terms that denote modes of expression with the clarity that speech and writing denote modes of linguistic expression. For lack of alternative terms, we are often forced to use terms such as ‘image’ and ‘music’ to refer to the modes that are integrated with speech or writing in multimodal texts, although we know that they do not exactly designate modes.
  • The Routledge Handbook of Linguistic Ethnography
    • Karin Tusting(Author)
    • 2019(Publication Date)
    • Routledge
      (Publisher)
    Chapter 10

    Multimodality

    Jeff Bezemer and Sahra Abdullahi

    Introduction

    In the past two decades, social scientists from different scholarly traditions, including sociolinguistics, conversation analysis and literacy studies, have started to describe an ever-growing range of different social-semiotic phenomena as instances of ‘Multimodality’. This is indicative of the emergence of a new, theoretically and methodologically diverse ‘field’, which is now often referred to as ‘Multimodality’ (MM). For general reviews of this field, the reader is referred to recently published secondary literature (see further reading). This chapter explores synergies and differences between MM and linguistic ethnography (LE). Our aim is to suggest ways in which theoretical and analytical tools from MM have been and might be brought to bear on and enrich (linguistic-)ethnographic accounts of social life. We demonstrate how two types of materials frequently obtained during ethnographic field work – video recordings and printed paper – can be analysed multimodally to advance understanding of meaning-making and communication in the contemporary social world.

    Historical perspectives

    The term ‘Multimodality’ was first introduced in the late 1990s, and is now widely used across the social sciences, including LE. MM has, over the years, become a diverse field. There is now a plethora of journals, both relatively new (e.g. Visual Communication, Multimodal Communication, Entanglements: Experiments in Multimodal Ethnography) and well established (e.g. Social Semiotics, Journal of Pragmatics, Research on Language and Social Interaction), that publish work that is described by its authors as ‘Multimodality’. General conferences (e.g. International Conference on Multimodality
  • Multimodality
    • John Bateman, Janina Wildfeuer, Tuomo Hiippala(Authors)
    • 2017(Publication Date)
    • The effects of visual mode, accent and attitude on EFL listening comprehension with authentic material (Modern Languages)
    • Integrated semantic processing of complex pictures and spoken sentences – Evidence from event-related potentials (Psychology)

    Interaction not competition
    The current lack of more inclusive accounts of the nature and operation of multimodal communicative activities dramatically compromises our abilities to deal with situations where Multimodality is in play, despite the fact that these situations are already frequent and, in many contexts, even constitute the norm. Attempts to move beyond the confines of individual disciplines or communities on the basis of those disciplines’ own approaches and techniques are commonly in danger of missing much of the complexity and sophistication of the areas they seek to incorporate.
    Multimodality in our view is inherently and intrinsically an interdisciplinary cooperative enterprise. Making this work is itself a major aspect of, and motivation for, the account.
    But the relevant model of interaction here can only be one of cooperation, not one of replacement or colonisation. We need to see the approach in this book as complementary to such bodies of knowledge. At the same time, however, we will need to go further and to place many of those tools and frameworks within the context of a broader view of ‘Multimodality’. Making such interactions maximally productive is then one last goal for what we consider essential for appropriate multimodal research and practice.

    1.3 What this chapter was about: the ‘take-home message’

    In the final sections of our chapters we will often summarise the preceding arguments, offering the final message that needs to be taken away from the reading. In the present chapter we have seen that there is a huge diversity of interesting cases of apparent combinations of ways of communicating and making meanings. Challenging combinations have been with us for a long time but now, with the help of new media and new production and access technologies, they appear to be proliferating into every area of communication and signification.
  • Multimodality

    A Social Semiotic Approach to Contemporary Communication

    • Gunther Kress(Author)
    • 2009(Publication Date)
    • Routledge
      (Publisher)
    Halliday, 1978). Yet in a ‘disciplinary common-sense’ – if such a thing exists – it is not too far off the mark. Often this position is accompanied by an assumption that, in any case, images are organized like language, as witness terms such as ‘visual literacy’, ‘visual language’. End of story.
    Take the example in Figure 4.2. In a social-semiotic multimodal account of meaning, all signs in all modes are meaningful. Linguistic theory can tell us very little as far as the sign-complex of Figure 4.2 is concerned. It can make a comment on the written caption; and nothing beyond that. The ‘core’ of this sign-complex is beyond the reach of any linguistic theory. A social-semiotic theory attends to general principles of representation: to modes, means and arrangements. It can, for instance, ‘say’ something about meaning relations and their instantiation in image through the spatial arrangement of visual elements; it can elucidate a syntax of this visual representation – the meaning-potential of spatial orientation of the players standing face-to-face; the use of colour as an ideational resource, to identify the teams; of proximity, that is, the use of distance as a meaning-resource; about affect (realized by the distance at which the players stand); their facial expression; down, literally, to the respective size and the prominence of the studs on their boots. Should we want to use such dichotomizing terms, we could say that affect and cognition are equally and simultaneously present; though much more significantly, social-semiotic theory allows us to challenge that dichotomy, via this example for instance.
    This is a multimodal text; the modes in use are writing, image, number, colour (and facial expression). Social Semiotics is able to say something about the function of each of the modes in this multimodal text; about the relation of these modes to each other; and about the main entities in this text.
    To summarize: linguistics provides a description of forms, of their occurrence and of the relations between them. Pragmatics – and many forms of sociolinguistics – tells us about social circumstances, about participants and the environments of use and likely effects. Social semiotics, and the multimodal dimension of the theory, tell us about interest and agency; about meaning(-making); about processes of sign-making in social environments; about the resources for making meaning and their respective potentials as signifiers in the making of signs-as-metaphors; about the meaning potentials of cultural/semiotic forms. The theory can describe and analyse all signs in all modes.
  • Technology, Literacy, Learning

    A Multimodal Approach

    • Carey Jewitt(Author)
    • 2012(Publication Date)
    • Routledge
      (Publisher)
    multimodal design (New London Group, 1996; Kress and van Leeuwen, 2001) reflects the ‘reality’ of meaning making. Modes are fully integrated; there is not, and never has been, a purely linguistic text, as writing is itself multimodal. What there has been is an educational focus on language – a privileging of language over other modes. Other modes have always been present but not always attended to (especially within educational research). To talk of multimodal design is to attend to all that is going on, including the visual character of writing (font, layout, colour), to listen to the ‘breathiness’, the tone, the pitch and the voice quality of speech and to understand these as part of meaning making. Conceptions of literacy need to be expanded beyond language to all modes. The static notion of literacy as the acquisition of sets of competencies can be replaced with a notion of literacy as a dynamic process through which students use and transform multimodal signs and design new meanings.
    In the domain of the screen, in which language is beginning to be de-centred and image and other modes are increasingly foregrounded, the traditional conceptions of learning and ‘literacy’ as entirely language based are inadequate. I want to claim ‘literacy’ as a term for design to describe the work of students with these resources. It is from this position that I argue that ‘literacy’ in the context of technology-mediated learning is multimodal. At the same time, I recognise the importance of attending to the specificity of linguistic, visual and other modes and their specific affordances in order to theorise how different modes contribute to the construction of curriculum knowledge and learning.
    Rethinking reading and literacy
    In this chapter I have shown that the multimodal character of screen makes a range of ‘non-linguistic’ resources available in ‘new’ and significant ways for reading and literacy. This is particularly so for the relationship of image and word. Through my discussion of illustrative examples of technology-mediated learning I have shown that the role of writing is changing both page and screen in ways that unsettle and de-centre the predominance of word. All of this has implications for the process of reading.
    Image is increasingly the main representational and communicational mode on screen, to the extent that when writing does dominate the screen it appears as a sign of convention or tradition. Writing is then one part, and not always a prominent part, of a multimodal ensemble of image, music, speech and moving elements on the screen. Where writing does feature on screen, it is often restricted to the naming and labelling of elements. In the multimodal environment of the screen, writing and speech can also be used more subtly in a game to shape the identity of a character — to highlight the role of language as a marker of identity, ‘belonging’ and ‘difference’.
  • A Semiotics of Multimodality and Signification in the Divine Comedy
    • Raffaele De Benedictis(Author)
    • 2023(Publication Date)
    • Routledge
      (Publisher)
    2     The Semiotics of Multimodality in Discourse
    DOI: 10.4324/9781003397298-2
    This Chapter has been made available under a CC-BY-NC-ND 4.0 license.

    2.1 General overview of a semiotics of discourse

    The term “discourse” has been employed in different ways by scholars and characterized by the specificity of the discipline in which it is used. In this study, I intend to define and utilize discourse as the single, individual act of communication that attempts to explain its internal dynamic process, which mediates between the intention of the author embedded in the text, the intention of the text, and the several ways in which the receiver may orient a textual discourse. Based on this definition, discourse aims at discovering possible interpretive paths that the receiver wishes to find and validate in the text. By means of discourse, the receiver is able to generate such paths through the direct and unrepeatable personal engagement with the text. Thus, the receiver’s responsibility is to consider the intention of the author authorized by the text and, more importantly, to pay close attention to the intention of the text and how it behaves as a host of further intentions in order to guarantee ontological and epistemological dependability of the interpretive process.
    Here I propose a phenomenological model because meaning is generated by the interplay of linguistic and non-linguistic influences. In this instance, discursive semiotics can prove its efficacy as it steers the reader primarily toward “a general syntax of discursive operations” in that the “universe of signification” is seen as a “praxis rather than as a stable set of fixed forms.” (Fontanille 2006, xx; see also Kristeva 1980, 36) Based on this expectation, the semiotics of discourse takes into account the two Saussurian levels of language, namely langue (the language-system shared by a community of speakers) and parole (the individual speech act made possible by the language) that contribute to form an active interplay working toward the production of meaning. In such a way, the text comes alive and fulfils its principal signifying function, which consists principally of examining the text as a type of process, as a dynamic mechanism that can be adequately analyzed in its manifold epistemic manifestations. As such, discourse is not what it would appear to be, that is, any type of reaction/intuition a receiver may get from the text, but it is rather the exercise of one’s competence vis-à-vis a peculiar text (in our case Dante’s text) and guided by the cultural and encyclopedic competence that Dante’s oeuvre requires as a product of the Middle Ages. Through interpretation, the semiotician focuses on the signifying faculty of the text and on the arrangement of potential discursive paths which will eventually manifest themselves as possible new content levels. In the DC, the semiotics of multimodal discourse is primarily an endeavor to anatomize such a polyvocal process emerging from the employment of multiple interpretive tools in the dissemination of Dante’s poetry. In light of the complexity of the subject matter, the interpreter must, first of all, become familiar with how meaning is formed on the basis of a view that recognizes the language of the DC as phenomenal manifestations of multimodal signification and not simply as pre-assembled “textual facts” (Fontanille 2006, 46). And only afterwards one may decide what