
Academic Writing and Publishing

A Practical Handbook

James Hartley


About This Book

Academic Writing and Publishing will show academics (mainly in the social sciences) how to write and publish research articles. Its aim is to supply examples and brief discussions of recent work in all aspects of the area in short, sharp chapters. It should serve as a handbook for postgraduates and lecturers new to publishing. The book is written in a lively and readable personal style. The advice given is direct and based on up-to-date research that goes beyond that given in current textbooks. For example, the chapter on titles lists different kinds of titles and their purposes, which other texts do not discuss, and the chapter on abstracts shows the reader how to write structured abstracts from the start.


Information

Publisher: Routledge
Year: 2008
ISBN: 9781134053650
Edition: 1
Topic: Education

Section 1
Introduction

Chapter 1.1
The nature of academic writing

Anyone who wishes to become a good writer should endeavour, before he allows himself to be tempted by the more showy qualities, to be direct, simple, brief, vigorous, and lucid.
(Fowler & Fowler, 1906, p. 11)

THE LANGUAGE OF SCIENCE AND ACADEMIA

If we examine the text of scientific articles it is clear that there is a generally accepted way of writing them. Scientific text is precise, impersonal and objective. It typically uses the third person, the passive voice, complex terminology, and various footnoting and referencing systems.
Such matters are important when it comes to learning how to write scientific articles. Consider, for example, the following advice:
Good scientific writing is characterised by objectivity. This means that a paper must present a balanced discussion of a range of views … Moreover, value judgements, which involve moral beliefs of what is ‘right’ or ‘wrong’, must be avoided … The use of personal pronouns is unnecessary, and can lead to biases or unsupported assumptions. In scientific papers, therefore, personal pronouns should not be used. When you write a paper, unless you attribute an opinion to someone else, it is understood to be your own. Phrases such as ‘in my opinion’ or ‘I think’, therefore, are superfluous and a waste of words … For the same reasons, the plural pronouns we and our are not used.
(Cited, with permission, from Smyth, 1996, pp. 2–3)

CLARITY IN SCIENTIFIC WRITING

In my view, following this sort of advice obscures rather than clarifies the text. Indeed, Smyth has rather softened his views with the passage of time (see Smyth, 2004). For me, the views expressed by Fowler and Fowler in 1906, which head this chapter, seem more appropriate. Consider, for example, the following piece by Watson and Crick, announcing their discovery of the structure of DNA, written in 1953. Note how this text contravenes almost all of Smyth’s strictures cited above:
We wish to suggest a structure for the salt of deoxyribose nucleic acids (D. N. A.). This structure has novel features which are of considerable biological interest.
A structure for nucleic acid has already been proposed by Pauling and Corey. They kindly made their manuscript available to us in advance of publication. Their model consists of three inter-twined chains, with the phosphates near the fibre axis, and the bases on the outside. In our opinion this structure is unsatisfactory for two reasons: (1) We believe that the material which gives the X-ray diagrams is the salt, not the free acid. Without the acidic hydrogen atoms it is not clear what forces would hold the structure together, especially as the negatively charged phosphates near the axis will repel each other. (2) Some of the van der Waals distances appear too small.
Another three-chain structure has also been suggested by Fraser (in the press). In his model the phosphates are on the outside and the bases on the inside, linked together by hydrogen bonds. This structure as described is rather ill-defined, and for this reason we shall not comment on it.
(Opening paragraphs from Watson and Crick, 1953,
pp. 737–8, reproduced with permission from James D.
Watson and Macmillan Publishers Ltd)
Table 1.1.1 lists some of the comments that different people have made about academic text. Some consider that academic writing is spare, dull and undistinguished. Some consider that articles in prestigious journals will be more difficult to read than articles in less-respected journals because of their greater use of technical vocabulary. Others warn against disguising poor-quality articles in an eloquent style. Indeed, there is some evidence that journals do become less readable as they become more prestigious and that academics and students do judge complex writing to be more erudite than simpler text (Hartley et al., 1988; Oppenheimer, 2005; Shelley and Schuh, 2001). Furthermore, Sokal (1996) once famously wrote a spoof article in scientific and sociological jargon that went undetected by the editors (and presumably the referees) of the journal it was submitted to.

Table 1.1.1 Some characteristics of academic writing

MEASURING THE DIFFICULTY OF ACADEMIC TEXT

There are many different ways of measuring the difficulty of academic text. Three different kinds of measure (which can be used in combination) are ‘expert-based’, ‘reader-based’ and ‘text-based’ (Schriver, 1989).

  • Expert-based methods are ones that use experts to make assessments of the effectiveness of a piece of text. Referees, for example, are typically asked to judge the quality of an article submitted for publication in a scientific journal, and they frequently make comments about the clarity of the writing. Similarly, subject-matter experts are asked by publishers to judge the suitability of a manuscript submitted for publication in terms of content and difficulty.
  • Reader-based methods are ones that involve the actual readers in making assessments of the text. Readers might be asked to complete evaluation scales, to state their preferences for different versions of the same texts, to comment on sections of text that they find difficult to follow, or be tested on how much they can recall after reading a text.
  • Text-based measures are ones that can be used without recourse to experts or to readers, and these focus on the text itself. Such measures include computer-based readability formulae and computer-based measures of style and word use.
Two particular measures deserve attention here because they have both been used to assess the readability of academic text. One is a reader-based measure, called the ‘cloze’ test. The other is a computer-based measure, called the Flesch ‘Reading Ease’ score.

Cloze tests

The cloze test was originally developed in 1953 to measure people’s understanding of text. Here, samples from a passage are presented to readers with, say, every sixth word missing. The readers are then required to fill in the missing words.
Technically speaking, if every sixth word is deleted, then six versions should be prepared, with the gaps each starting from a different point. However, it is more common ______ prepare one version and perhaps ______ to focus the gaps on ______ words. Whatever the procedure, the ______ are scored either:
  • by ______ accepting as correct those responses ______ directly match what the original ______ actually said, or
  • by ______ these together with acceptable synonyms.
As the two scoring methods (a) and (b) correlate highly, it is more objective to use the tougher measure of matching exact words (in this case: ‘to’, ‘even’, ‘important’, ‘passages’, ‘only’, ‘which’, ‘author’ and ‘accepting’).
Test scores can be improved by having the gaps more widely dispersed (say every tenth word); by varying the lengths of the gaps to match the lengths of the missing words; by providing the first of the missing letters; by having a selection of words to choose from for each gap; or by having readers work in pairs or small groups. These minor variations, however, do not affect the main purpose of the cloze procedure, which is to assess readers’ comprehension of the text and, by inference, its difficulty.
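
This deletion-and-scoring procedure is easy to mechanise. Below is a minimal sketch in Python, assuming plain prose input; the function names and the exact-match scorer are my own illustration rather than anything prescribed in the cloze literature:

    # Build a cloze test by blanking every nth word, and score responses
    # by exact matching (the tougher of the two methods described above).

    def make_cloze(text, n=6, gap="______"):
        """Blank every nth word; return the test text and the answer key."""
        words = text.split()
        answers = []
        for i in range(n - 1, len(words), n):  # i = 5, 11, ... (the 6th, 12th, ... words)
            answers.append(words[i])
            words[i] = gap
        return " ".join(words), answers

    def score_exact(responses, answers):
        """Count responses that exactly match the deleted words."""
        clean = lambda w: w.lower().strip(".,;:!?")
        return sum(clean(r) == clean(a) for r, a in zip(responses, answers))

    passage = ("Technically speaking, if every sixth word is deleted, then six "
               "versions should be prepared, with the gaps each starting from "
               "a different point.")
    test, key = make_cloze(passage)
    print(test)   # the passage with every sixth word blanked
    print(key)    # the deleted words, kept for scoring

Accepting synonyms (the second scoring method) would require a human judge or a thesaurus, which is one reason the exact-match measure is considered more objective.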
The cloze test can be used by readers both concurrently and retrospectively. It can be presented concurrently (as in the paragraph above) as a test of comprehension, and readers are required to complete it, or it can be presented retrospectively, and readers are asked to complete it after they have first read the original text. In this case the test can serve as a measure of recall as well as comprehension. The cloze test can also be used to assess the effects on readers’ comprehension of different textual organisations, readers’ prior knowledge and other textual features, such as illustrations, tables and graphs (Reid et al., 1983).
There are few studies using the cloze test with academic text. However, it has been used (along with other measures) to assess the readability of original and revised versions of journal abstracts (Hartley, 1994).

The Flesch Reading Ease score

The Flesch score is (now) one of many easily obtained computer-based measures of text readability. The scores run from 0 to 100, and the higher the score, the easier the text. The original measure was created in 1943 by Rudolf Flesch to measure the readability of magazine articles (Klare, 1963). Current versions count the length of the words and the length of the sentences in a passage and combine these into a reading ease (RE) score (Flesch, 1948). The underlying logic is clear: the longer the sentences, and the longer the words within them, the more difficult the text will be. Scores can be grouped into the categories shown in Table 1.1.2.

Table 1.1.2 Flesch scores and their interpretation

  90–100  Very easy
  80–90   Easy
  70–80   Fairly easy
  60–70   Standard
  50–60   Fairly difficult
  30–50   Difficult
  0–30    Very difficult

Academic text typically falls into the ‘difficult’ and the ‘very difficult’ categories.
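
The published formula (Flesch, 1948) combines average sentence length with average word length in syllables: RE = 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word). The following Python sketch implements it; the syllable counter is a crude vowel-group heuristic of my own, so its scores should be treated as approximate rather than as matching any published tool:

    import re

    def count_syllables(word):
        """Approximate syllables as vowel groups, discounting a final silent 'e'."""
        groups = re.findall(r"[aeiouy]+", word.lower())
        n = len(groups)
        if word.lower().endswith("e") and n > 1:
            n -= 1
        return max(n, 1)

    def flesch_reading_ease(text):
        """RE = 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)."""
        sentences = max(len(re.findall(r"[.!?]+", text)), 1)
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (206.835
                - 1.015 * (len(words) / sentences)
                - 84.6 * (syllables / len(words)))

    # Comparing two versions of the same statement:
    wordy = ("The implementation of the aforementioned methodology "
             "necessitates considerable deliberation.")
    plain = "Using this method takes careful thought."
    print(flesch_reading_ease(wordy), flesch_reading_ease(plain))

The plainer version scores markedly higher (easier); very dense text can even drive the formula below zero, which is one reason academic prose so often lands in the bottom categories of Table 1.1.2.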
There are a number of obvious limitations to this measure (along with most other computer-based measures of readability). The formula was developed in the 1940s for use with popular reading materials rather than academic text: it is thus somewhat dated and not entirely appropriate in the current context. The notion that longer words and longer sentences make for more difficult text, although generally true, is naïve. Some short sentences are very difficult to understand. The calculations do not take into account the meaning of the text to the reader (you will get the same score if you process the text backwards), nor do they take into account readers’ prior knowledge of the topic in question, or their motivation, both of which contribute substantially to reading difficulty.
Nonetheless, despite these limitations, the Flesch score has been widely used to assess the readability of academic text, partly because it is a convenient tool on most writers’ personal computers. It is simple to run and keeps a check on the difficulty level of what you are writing as you proceed. It is also useful as a measure of relative difficulty: we might well agree that one version of a text with a Flesch score of 50 is likely to be easier to read than another version with a score of 30, and the scores can be used to make comparisons between different texts, and between different versions of the same text.
Some examples might serve to illustrate this. My colleagues and I, for instance, once carried out four separate studies using the Flesch and other computer-based measures of text to test the idea that influential articles would in fact be more readable than less influential ones (Hartley et al., 2002). In the first two of these studies, we compared the readability of sections from famous articles in psychology with that of sections from the articles that immediately followed them in the same journals (and were not famous). In the second two studies, we compared the readability of highly cited articles in psychology with that of similar controls. The results showed that the famous articles were significantly easier to read than their controls (average Flesch scores of 33 versus 25), but that this did not occur for the highly cited articles (average Flesch scores of 26 and 25).
In another study, we compared the readability of texts in the sciences, the arts and the social sciences, written in various genres (Hartley et al., 2004). Here, we compared extracts in all three disciplines from sets of research articles, textbooks for colleagues, textbooks for students, specialist magazine articles and magazine articles for the general public. The main finding here was not surprising: the texts got easier to read, as measured by the Flesch scores, as they moved across the genres, from 15 to 60. There was little support, however, for our notion that the scientific texts would be easier to read than those in the other disciplines within each of the different genres.
In a third example, we used Flesch scores, together with data from other computer-based measures, to examine the relative readability of the abstracts, introductions and discussions from eighty academic papers in psychology (Hartley et al., 2003). Here the abstracts scored lowest in terms of readability (mean score of 18), the introductions came next (mean score of 21), and the discussions did best of all (mean score of 23). Intriguingly, although the mean scores of the different sections differed, the authors wrote in stylistically consistent ways across the sections. Thus, readability varied across the sections, but was consistent within authors.

THE STRUCTURE OF SCIENTIFIC ARTICLES

Research articles typically have a standard structure to facilitate communication, which is known as IMRAD (introduction, method, results and discussion), although, of course, there are variations on this basic format. The chapters that follow in Section 2 of this book elabora...
