Then I decided to tell him a little story: The other day a man walked into the new dinosaur exhibit hall at the American Museum of Natural History in New York and saw a huge skeleton on display. He wanted to know how old it was, so he went up to an old curator sitting in the corner and said, ‘I say old chap, how old are these dinosaur bones?’
The curator looked at the man and said, ‘Oh, they’re sixty million and three years old, sir.’
‘Sixty million and three years old? I didn’t know you could get that precise with ageing dinosaur bones. What do you mean sixty million and three years old?’
‘Oh well,’ he said, ‘they gave me this job three years ago and at the time they told me they were sixty million years old.’
(Ramachandran, 2005: 18–19)
Introduction
There can be no doubt that in the modern world we are reliant on ‘best evidence’. Governments act on the basis of what the science tells them – in the UK, for instance, the well-publicised responses to outbreaks of mad cow disease, foot and mouth disease and avian flu in the last ten years or so have all been relayed to the general public as the government acting on what the scientific evidence says. We see it in the way that medical practitioners are advised to prescribe medication and select surgical techniques on the basis of economy, informed by meta-analyses of clinical trials, and we are now beginning to see it with regard to climate change and the ways in which energy needs can be best accommodated in the future. This fixation with best evidence has slowly but surely entered the various fields in which people work with other people. This may be direct work that helps those in need, such as health care, social care or education, or it may be indirect work, such as observing human beings and their interactions with the world in order that we may understand ourselves better, as in the social sciences.
I do not suggest that we should not work to what is considered at the time to be best evidence, for it clearly has an important place, but I would suggest that focusing so heavily on the ideal of best evidence through positivism is detrimental to wider notions of learning and understanding, particularly as only a limited number of research approaches are considered to be scientifically ‘sound’. We have even seen this in recent years in the UK higher education system, once a bastion of new knowledge and academic endeavour, whereby universities have been scrutinised through the Research Assessment Exercise and published works are considered valid only if they are grounded in scientific rigour. In short, the practice of recording what is narrowly observable is prioritised over more theoretical work that could open up a field of inquiry rather than close it down. The field of health care, more so than social care or education, has historically wrestled with the issues in the debate on art versus science, but the question is now spreading into these other domains too.
Now I shall review this debate and examine it in the relatively recent light of evidence-based practice and reflection, which I believe effectively replaces the arts versus science terminology. Evidence-based practice is a term and a practice commonly used in health and social care. It is possibly less well known in the field of education, but nonetheless the principles remain the same: essentially it is used to identify a problem, seek out the most valid research evidence bearing on it, and then apply that evidence, measuring the results for their efficacy. In essence, this approach is distinctly different to experiential learning, which is viewed currently as subjective and difficult to measure accurately. However, the nature of research and evidence is open to constant debate and contest. Initially, I would like to provide a small vignette on the development of evidence-based medicine and the conflicts which have existed (and still do exist) about what is valid. From this I will broaden the discussion into one of evidence-based practice, experiential learning and reflection more generally.
The roots of evidence-based practice – a very brief history of medicine
The basis of what we now understand broadly as evidence-based practice has at its heart the role of evidence in medicine – first in determining it as a profession with unique knowledge and power (see Illich, 1977, which is as valid today as it was then), and second as a way in which interventions can be known to be effective and standardised. Whilst the construct of evidence-based medicine might be relatively new, the use of evidence-based intervention is not, and it is the relationship between cause and effect that has been sought and illustrated since man first documented his existence.
As long as we have existed we have desired – needed, perhaps – to make connections between natural phenomena and to use them for our wellbeing. Although theories have changed across a wide expanse of time and geography, they form part of the continuum of the use of evidence to secure our longevity. The understanding of correlations and causations between the universe and illness may have been different – for instance, Inglis (1965) writes of the relationships ascribed to illness and cosmic phenomena in the ancient civilisation of Babylonia. Today, we know that weather can affect such conditions as asthma and seasonal affective disorder, and this may have comprised evidence for the Babylonians. Inglis (1965) also notes that the Babylonians had a professional rule book for the practice of medicine: the Code of Hammurabi (1790 BC), which existed to protect citizens rather than to provide any peace of mind for the practitioner, and could have brutal consequences for those who got their intervention wrong.
Roy Porter (1996) confirms that fees paid to a healer were specified on a sliding scale, depending on the status of the patient, and that draconian measures similar to those imposed on incompetent shipwrights and architects were implemented for failure.
This form of regulation – although not quite so punitive in the twenty-first century AD – is not dissimilar to that which exists today, for it places responsibility on the practitioner to know what he is doing through what is known. In the modern day this is enshrined legally, for only those who are appropriately trained and qualified are able to diagnose, prescribe and treat. There are other similarities with the past, too. Porter (1996) recognises that competing healers in Greek society, such as bone-setters, surgeons, physicians and exorcists, formed a market place from which choices of treatment were left to individual patients to decide, but that this market existed within a set of rules for medical ethics and advertising, much in the way that today a governmental rhetoric of ‘choice’ is espoused as policy. Arguably this principle applies to the wider delivery of health and social care through the ‘care management’ process, in which local authorities assess, plan and cost care needs, then turn to the market place for tender. The care provider who appears to deliver the best value for money is then funded and regulated. However, the representatives of the state now decide, not the individuals in need of care. As well as these market place approaches to health and social care, in the UK changes are taking place in compulsory education through the development of foundation schools – a new form of school governance designed to create competition through the guise of quality improvement.
The early Greek understanding of medicine was similar to that of the Babylonians, relying mainly on cosmic or mythical interventions for cure, but it was transformed in its later period by the introduction of rationality and systematic method to Greek thought. Hippocrates has long been credited with leading this transformation in medicine by using a systematic approach based on observation, comparison and recording of disease – however, it is probably wrong to suggest that Hippocrates himself was responsible for this, as it is likely that the scientific nature of Greek thought meant that he was but one of many involved in this process; arguably, as Inglis (1965) suggests, he may simply have been in the right place at the right time. Two main forms of medicine existed during this period: allopathy – a doctrine of contraries; and homeopathy – a doctrine of similars. In their simplest forms, these approaches could be easily divided. Allopathy suggested that to confront a symptom with something that restored a balance was the correct course of action: so, to reduce a fever, the patient must be cooled down; similarly, a constipated patient must be given a laxative. Homeopathy, suggested through Hippocratic teaching, was grounded in the notion that each person had a life force – a protective entity that acted to safeguard the individual. In this view the life force could act to prevent the possibility of disease – for instance, vomiting or the formation of boils was not necessarily due to the disease itself, but served to ward it off. In this case, rather than preventing a symptom such as vomiting – as allopathy would prescribe – the purpose of the intervention would be to assist the life force by encouraging the vomiting in order to throw off the disease or poison.
Although these doctrines existed as separate sects, it was inevitable that physicians would come to use them interchangeably, based on their observations of outcomes, and in this we begin to see the birth of empiricism, but not as we now know it.
The first to bring these differing approaches together was Galen, born in the ancient Greek city of Pergamum, who spent the majority of his medical life attending to emperors in Rome during two periods totalling over thirty years. Galen was renowned for his ability to diagnose and treat, but he was also a keen recorder of his work and findings as well as a perceptive anatomist, and he strongly advocated placing anatomy as central to medicine, rather than cosmology or mythology (Porter, 1996). Galen’s approach made him useful to teachers and students of medicine. Inglis (1965) writes that Galen practised and preached a mixture of personal experiences and rival theories fused together in a somewhat unorthodox Pythagorean structure, for although it followed a rationalist process it did not quite add up, and would be distorted and oversimplified in order to appear coherent. He also practised and extolled the value of polypharmacy. As Inglis (1965: 38) notes, he wrote that ‘It is the business of pharmacology to combine drugs in such a manner … as shall render them effective in combating or overcoming the conditions which exist in all the different diseases.’ Of course, in reality this meant experimentation. If one drug did not work then another would be tried, or a different dosage, or a combination of drugs. Although the results of this practice were erratic, Galen believed that his methods were empirical. A point had been reached where some results were striking, but others were not. Medicine as a discipline was now formulating some theories about what works, but not how it works, and was still considered by many – including Galen himself, to an extent – to be linked to supernatural forces.
What followed were some first steps into medical orthodoxy – the truth – which were inclusive of all these factors. In terms of discovery and research, the Romans were keen anatomists. Physicians were limited in knowing how the body works by having access to it only in its deceased state. It would be far better, it was surmised, if it could be opened up and...