Introduction
Research related to the health of humans should have the potential to advance scientific understanding or improve the treatment or prevention of disease. The expectation is that an account of the research will be published, communicating the results of the research to other interested parties. Publication is generally in the form of articles in scientific journals, which should describe what was done and what was found. Reports of clinical research are important to many groups, especially other researchers, clinicians, systematic reviewers, and patients.
What do readers need to know? While there are multiple aspects to that question, and the specifics vary according to the nature of both the research and the reader, certain broad principles should be unarguable. Obviously, research reports should be truthful and should not intentionally mislead. As noted by the International Committee of Medical Journal Editors, “In return for the altruism and trust that make clinical research possible, the research enterprise has an obligation to conduct research ethically and to report it honestly” [3]. In addition, research reports must be useful to readers – articles should include all the information about methods and results that is essential to judge the validity and relevance of a study and, if desired, use its findings [4]. Journal articles that fail to provide a clear account of methods are not fit for their intended purpose [4].
A vast literature over several decades has documented persistent failings of the health research literature to adhere to those principles. Systematic reviews are a prime source of evidence of these failings (Box 1.1). In addition, hundreds of reviews of published articles, especially those relating to randomized controlled trials (RCTs), have consistently shown that key information is missing from trial reports [5, 6]. Similar evidence is accumulating for other types of research [7–11]. Without a clear understanding of how a study was done, readers are unable to judge whether the findings are reliable. Inadequate reporting thus leaves readers with an unpalatable choice: either reject an article outright or accept its findings on trust that the study was done well.
Box 1.1 Examples of poor reporting highlighted in systematic reviews
“Risk of bias assessment was hampered by poor reporting of trial methods [64].”
“Poor reporting of interventions impeded replication [65].”
“15 trials met the inclusion criteria for this review but only 4 could be included as data were impossible to use in the other 11 [66].”
“Poor reporting of duration of follow-up was a problem, making it hard to calculate numbers needed to treat to benefit … one of the largest trials of the effects of cardiac rehabilitation, which found no beneficial effect, is yet to be published in a peer-reviewed journal over a decade after its completion [67].”
“Four studies compared two different methods of applying simultaneous compression and cryotherapy, but few conclusions could be reached. Poor reporting of data meant that individual effect size could not be calculated for any of these studies. Furthermore, two studies did not provide adequate information on the mode of cryotherapy, and all failed to specify the duration and frequency of the ice application [68].”
“With more complete reporting, the whole process of evaluating the quality of research should be easier. In my work as a systematic reviewer, it is such a joy to come across a clearly reported trial when abstracting data [69].”
This situation is unacceptable. It is also surprising, given the strong emphasis on the importance of peer review of research articles. Peer review is used by journals as a filter to help them decide, often after revision, which articles are good enough and important enough to be published. Peer review is widely believed to be essential and, in principle, it is valuable. However, as currently practised, peer review clearly fails to prevent inadequate reporting of research, and it fails on a major scale. This is clear from the fact that the thousands of studies included in the literature reviews already mentioned had all passed peer review. Articles published in the most prestigious (and highest-impact) journals are not immune from such errors, as many of those literature reviews focussed entirely on those journals [12–14]. Peer review (and other quality checks such as technical editing) clearly could be much more effective in preventing poor-quality reporting of research [15].
The abundant evidence from reviews of publications shows that ensuring that reports are useful to others does not currently feature highly in the actions, and likely the thinking, of many of those who write research articles. Authors should know by now that it is not reasonable to expect readers to take on trust that their study was beyond reproach. In any case, the issue is not just detecting poor methods but, more fundamentally, simply learning exactly what was done. It is staggering that reviews of published journal articles persistently show that a substantial proportion of them lack key information. How can it be that none of the authors, peer reviewers, or editors noticed that these articles were substandard and, indeed, often unfit for purpose?
In this chapter, we explore the notion of tra...