The Routledge Handbook of Research Methods in Applied Linguistics

About This Book

The Routledge Handbook of Research Methods in Applied Linguistics provides a critical survey of the methodological concepts, designs, instruments and types of analysis that are used within the broad field of applied linguistics. With more than 40 chapters written by leading and emerging scholars, this book problematizes and theorizes applied linguistics research, incorporating numerous multifaceted methodological considerations and pointing to the future of good practice in research. Topics covered include:

  • key concepts and constructs in research methodology, such as sampling strategies and mixed methods research;
  • research designs, such as experimental research, case study research, and action research;
  • data collection methods, from questionnaires and interviews to think-aloud protocols and data elicitation tasks;
  • data analysis methods, such as the use of R, inferential statistical analysis, and qualitative content analysis;
  • current considerations in applied linguistics research, such as the need for transparency and greater incorporation of multilingualism in research; and
  • recent innovations in research methods related to multimodality, eye-tracking, and advances in quantitative methods.

The Routledge Handbook of Research Methods in Applied Linguistics is key reading for both experienced and novice researchers in Applied Linguistics as well as anyone undertaking study in this area.

Information

Editors: Jim McKinley and Heath Rose
Publisher: Routledge
Year: 2019
ISBN: 9781000734171
Edition: 1
Pages: 530

Part I
Key concepts and current considerations

1
Methodological transparency and its consequences for the quality and scope of research

Emma Marsden

Introduction

Methodological transparency can involve all aspects of the research process, from initial design, through peer review, to dissemination of findings. It means making the research process fully transparent so that reviewers and readers can understand exactly what the researchers did to elicit, analyse, and understand their data; that is, how they moved from their research aims to data to findings to interpretation. As a key component of the open science movement, transparent methods are being (and have the potential to be) adopted to different extents across the subdomains within the broad field of applied linguistics. In this chapter, I describe the different practices that are required to make our research methodologically transparent and the driving forces behind these practices. I examine the extent to which methodological transparency is already established in the field and sub-fields of applied linguistics. I illustrate some of the costs of limited transparency by drawing on sobering examples from research into second language learning and teaching. I then describe some of the key benefits of working towards increased methodological openness and clarity. Finally, I highlight some of the key developments and initiatives that can help us make our methods more transparent and thereby improve the quantity, quality, scope, and usefulness of applied linguistics research, whilst also acknowledging the challenges that this involves.
It is important to distinguish between at least two interpretations of ‘transparency’. One – the ‘soft version’ – involves making our research process fully available to other researchers, including reviewers, editors, and readers in the academic community who have access to journal articles and books that are usually behind paywalls. A second – the ‘strong version’ – involves making our research process fully open to everyone, that is, transparent to those beyond the paywalls of the academy, by adopting open science practices and providing materials, data, and publications that are available at no cost at the point of access. In this chapter, I will attempt to identify which of these interpretations I refer to at different points, but the two are closely intertwined and inevitably merge into one another. (For an account of the broader notion of ‘open science’, see Marsden, 2019.)
In terms of its scope – its relevance to different sub-domains and methods within applied linguistics – the need for methodological transparency applies to almost all of them. However, the current chapter largely relates to research into multilingual language learning and education (broadly defined), mainly because those areas have been, to date, the focus of meta-science into methodological transparency. This focus is also partly due to the fact that concerns about, and changes in, practices have tended to be driven by researchers working in more quantitative or hypothesis-driven research, often a characteristic of those sub-domains. Nevertheless, most transparent practices can arguably be adopted and adapted within any sub-domain, from ethnographic approaches through to laboratory experiments. However, different designs, methods, and epistemologies (philosophies about the nature of knowledge) do entail different purposes and challenges for methodological transparency, issues that are touched upon at various points in this chapter.

What is methodological transparency?

The life cycle of methodological transparency runs in tandem with the life cycle of a research project. It affects decisions right from the start of the research process (conceptualisation and design) through to its end (reporting the research findings). At the start of the research process, funding requests and planning must consider what resources and steps will be required to make it possible for others to understand and use materials, procedures, or data. For example, to make data available for full scrutiny, human participants must agree before data collection to have their data made available to more than just the researcher(s), and this must be approved via institutional ethics boards; data management plans must include details of where and how the data will be stored and the level of anonymity that is possible and necessary. To attain fully open research methods, participants are asked to agree to having their (anonymised) data made available in a public repository and held there indefinitely, as is increasingly being made a condition of funding by bodies across the world (see Marsden, Trofimovich, & Ellis, 2019; Trofimovich & Ellis, 2015 for some more information on these).
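To make this concrete: as a minimal, hypothetical sketch (the file names, the column name, and the salt value are all invented for illustration), a data management plan might specify a scripted pseudonymisation step like the following before any dataset is deposited in a public repository.

```python
# Hypothetical pseudonymisation step prior to public deposit.
# Salted hashing replaces names with stable, non-reversible codes;
# note this is pseudonymisation, not full anonymisation - free-text
# responses may still identify participants and need separate checks.
import hashlib
import pandas as pd

SALT = "project-specific-secret"  # kept out of the public repository

def pseudonymise(name: str) -> str:
    return hashlib.sha256((SALT + name).encode("utf-8")).hexdigest()[:10]

df = pd.read_csv("raw_responses.csv")               # hypothetical raw file
df["participant"] = df["participant"].map(pseudonymise)
df.to_csv("anonymised_responses.csv", index=False)  # file for deposit
```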
At the next stage of the research process, during the design of materials (such as protocols, stimuli, schedules, and tests), researchers must bear in mind that all of their materials will be scrutinised by reviewers and readers. Knowing this beforehand may affect decisions about the design process itself, akin to a ‘backwash’ effect of transparency on the research process. For example, at this point, researchers might ask themselves: how well should I pilot this material (questionnaire, interview protocol, oral production test)? What do I already know about the reliability of this instrument, in terms of either the data that it elicits (internal reliability) or the extent to which it can be coded or scored reliably by more than one person? Do I need to get others to check the language used (for accuracy and appropriacy)? If I use images or videos, do I need the permission of others (e.g. authors, publishers, participants) so I can share these materials later? Do the materials alone allow others to carry out a similar study with the confidence that they could compare their findings to ours, or do I need to provide an additional document laying out a protocol or explaining our decisions (e.g. specific words said to participants, the layout of a room, operation of equipment, the way in which access to a context or participants was obtained, or the order in which events must flow)? What data do I need to collect so that I will be able to describe my participants fully, with demographic information such as age, proficiency, language background, and context? In short, can others evaluate the relevance of my research to their own context and, where appropriate, could my methods be replicated? The knowledge that actual research materials will be made fully available to reviewers and other researchers (or, in the case of open science, to all) probably forces us to consider our design more carefully than if our materials were only to be described or a small sample of them provided. Indeed, this backwash effect already has some empirical support in some disciplines (described briefly later in this chapter).
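One of the questions above, whether an instrument can be ‘coded or scored reliably by more than one person’, is typically answered with an agreement statistic. A minimal sketch, assuming two raters and invented categorical codes, is given below; it computes Cohen’s kappa, one common chance-corrected agreement index.

```python
# Minimal sketch: inter-rater agreement (Cohen's kappa) for two coders,
# assuming each assigned exactly one categorical code per response.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical codes."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal proportions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Invented codes assigned by two raters to ten interview excerpts.
rater_1 = ["form", "meaning", "form", "form", "meaning",
           "form", "meaning", "meaning", "form", "form"]
rater_2 = ["form", "meaning", "form", "meaning", "meaning",
           "form", "meaning", "form", "form", "form"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # kappa = 0.58
```

For these invented codes, raw agreement is 0.80 but kappa is about 0.58, which illustrates why chance-corrected indices are worth reporting alongside raw agreement.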
Once we have designed our instruments and collected our data, we face another set of decisions that affect transparency, concerning data preparation and data description. We will need to report if and how we cleaned our data (e.g. the level of anonymisation, removal of outliers, normalisation techniques), how we coded or scored our data, and how we analysed them. For quantitative research, certain details of reporting are necessary and increasingly expected by journals, such as descriptive statistics (means, sample sizes before and after data cleaning, standard deviations, confidence intervals), effect sizes and their confidence intervals, and instrument and rater reliability information (see Larson-Hall & Plonsky, 2015).
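The following minimal sketch (the scores and group labels are invented; numpy and scipy are assumed to be available) shows how the descriptive statistics, confidence intervals, and effect size mentioned above might be computed and reported.

```python
# Hypothetical sketch of transparent quantitative reporting:
# descriptives, 95% confidence intervals, and Cohen's d for two groups.
import numpy as np
from scipy import stats

group_a = np.array([12, 15, 11, 14, 13, 16, 12, 15])  # invented scores
group_b = np.array([15, 18, 14, 17, 15, 19, 16, 18])  # invented scores

for name, x in [("group A", group_a), ("group B", group_b)]:
    n, mean, sd = len(x), x.mean(), x.std(ddof=1)
    lo, hi = stats.t.interval(0.95, df=n - 1, loc=mean, scale=stats.sem(x))
    print(f"{name}: n={n}, M={mean:.2f}, SD={sd:.2f}, "
          f"95% CI [{lo:.2f}, {hi:.2f}]")

# Cohen's d with a pooled standard deviation (independent-groups formula).
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
d = (group_b.mean() - group_a.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```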
A closely related aspect of transparency involves making any further analysis process as visible as possible: how exactly the raw data were reduced to the format in which they are presented as ‘results’. For quantitative research, this means describing how, for example, percentages were calculated; which specific measures or indices were used (e.g. which eye movement measurements, of the many available; which regions/samples/extracts of language; which measures of complexity, accuracy, or fluency were adopted); which criteria were used to select the analysis procedures; which assumptions underpinning certain statistical procedures were checked; which criteria were used to select factors for inclusion or interpretation (e.g. in exploratory factor analysis, structural equation modelling, latent growth curve analyses); and which modelling procedure was adopted (e.g. for regression-based analyses). Increasingly, quantitative researchers can use open source software and provide their actual analysis code (see Larson-Hall & Mizumoto, this volume), allowing others to replicate the analysis exactly.
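As an illustration of what sharing analysis code can look like, here is a minimal, hypothetical sketch (the file scores.csv, its columns, and the 2.5 SD outlier criterion are all invented) in which each data-reduction decision is written into the script itself, so that others could rerun the identical pipeline on the deposited data.

```python
# Hypothetical shareable analysis script: every cleaning and analysis
# decision is stated in code, so the pipeline can be rerun exactly.
import pandas as pd
from scipy import stats

df = pd.read_csv("scores.csv")  # columns: participant, condition, rt_ms

# Documented cleaning step: remove reaction times beyond 2.5 SD of the mean.
CUTOFF = 2.5
z = (df["rt_ms"] - df["rt_ms"].mean()) / df["rt_ms"].std(ddof=1)
cleaned = df[z.abs() <= CUTOFF]
print(f"Removed {len(df) - len(cleaned)} of {len(df)} "
      f"observations (|z| > {CUTOFF})")

# Documented analysis step: independent-samples t-test across conditions.
a = cleaned.loc[cleaned["condition"] == "A", "rt_ms"]
b = cleaned.loc[cleaned["condition"] == "B", "rt_ms"]
result = stats.ttest_ind(a, b)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```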
At this point, the reader might be thinking that all these issues simply constitute good practice – our research methods training and our peer review and editorial processes should take care of all this! The next section shows how, in fact, the field of applied linguistics has not yet achieved transparency in many of the ways laid out here. The following section then illustrates how this situation affects our ability to understand, evaluate, and replicate research.

What is the state of play of methodological transparency in applied linguistics?

The extent of methodological transparency in our field, in its entirety, has not yet been systematically evaluated, but a burgeoning meta-science is now examining the field’s methodological and reporting practices. As Byrnes (2013) notes:
it appears that at this point in the development of applied linguistics, [methodological issues] demand a kind of professional scrutiny that goes directly to the core of what we do and what we know and what we can tell our publics that we know – and not only how we do it.
(p. 825)
This meta-science includes a growing number of systematic syntheses of particular methodological practices (such as research design and data elicitation or analysis techniques). It has served to highlight a severe lack of methodological transparency. In this section, I bring together some of this research to provide a short narrative account of the extent and nature of methodological transparency in terms of materials, data, and analysis. The picture about to be described is sobering, with few bright points. However, it is very important to note that there is no intention to criticise or ‘blame’ researchers, reviewers, or editors: changing expectations and standards are entirely inevitable as our research aims, cultures, and capacities shift in concert with evolving societal views and technological innovation. The aim here is simply to describe the situation to date and indicate the direction of change.

Materials transparency to date

As previously noted, authors can make their materials (stimuli, procedure, analysis protocols, code) and data (raw and/or coded or reduced at some level) available. Early steps in this direction were taken by individual researchers, in the absence of larger, more sustainable infrastructure and incentives: for example, vocabulary research materials have been available on Paul Meara’s Lognostics site (www.lognostics.co.uk/) for many years, as have materials for research into language attrition on Monika Schmid’s Language Attrition website (https://languageattrition.org/). More examples regularly emerge, such as Atsushi Mizumoto’s resources for analysis and natural language processing. As committed as these individuals are, repositories that are sustainable and community supported – such as Instruments for Research in Second Languages (IRIS; www.iris-database.org) or the Open Science Framework (OSF) – are now available and offer perhaps greater hope of sustainability, visibility, and reach across broader domains of research. For these reasons, such repositories (rather than individual or institutional platforms) are endorsed by the Center for Open Science, a large international philanthropic initiative established in 2013 to promote and facilitate open science practices across disciplines.
To facilitate methodological transparency in the domain of language learning, use, and education, an initiative known as IRIS began in 2012 (Marsden, Mackey, & Plonsky, 2016; Marsden, Thompson, & Plonsky, 2017). This repository now holds more than 4,300 files of materials and analysis protocols, including 77 files of second language learning data. Among these materials are numerous examples of the methods covered by this handbook. IRIS offers a discipline-specific, highly searchable platform, hosting only materials and data from peer-reviewed publications (including PhD theses). This is in contrast to the OSF, which also holds works in progress and non-peer-reviewed work. Enthusiasm for and engagement with IRIS can be seen in the approximately 36 journals that encourage their accepted authors to upload their materials to IRIS. In 2017, this practice was endorsed in the American Association for Applied Linguistics’ Publication Guidelines, and by the British Association for Applied Linguistics.
The IRIS resource offers great potential and has received very active support from some quarters. But what proportion of articles have materials that are actually available on IRIS? An ongoing study is investigating this by examining how many data collection instruments could be available from the ten journals that have the most materials on IRIS, from 2013, just after IRIS went live, to the end of 2018 (Marsden, Thompson, & LaFlair, in preparation). They have found that, in total over that period, only around 13.6% of the articles that used data collection instruments made some materials available on IRIS. The trajectory increased in the first few years, but seems to have plateaued at about 15% annually. This low proportion is despite the fact that 36 journals in applied linguistics report that they routinely invite their authors to upload their materials upon acceptance of a manuscript.
The IRIS repository also has two special collections of materials that have been reviewed in published methodological syntheses: 62 self-paced reading tests (Marsden, Thompson, & Plonsky, 2018) and 110 acceptability judgement tests (Plonsky, Marsden, Crowther, Gass, & Spinner, 2019). Although this may seem impressive, these special collections were in fact gathered only after an intensive effort by the IRIS team to contact all the authors (where possible) to seek their instruments. Before this effort, Marsden et al. (2018) found that only 4% of self-paced reading studies had openly available materials and that 77% had only a brief example of stimuli available in their articles (not the full instrument). Similarly sobering is that for judgement tests (JTs), the 110 materials that are now available still represent under one third of the total of 385 JTs that the authors found for inclusion in their synthesis. Another indication of low levels of materials transparency was found in a synthesis of replication studies, in which Marsden, Morgan-Short, Thompson, and Abugaber (2018) found very low levels of availability of materials in the initial...

Table of contents

  1. Cover
  2. Half Title
  3. Series
  4. Title
  5. Copyright
  6. Contents
  7. List of figures
  8. List of tables
  9. List of text boxes
  10. List of contributors
  11. Introduction: theorizing research methods in the ‘golden age’ of applied linguistics research
  12. Part I Key concepts and current considerations
  13. Part II Designs and approaches to research
  14. Part III Data collection methods
  15. Part IV Data analysis
  16. Index