Section 1
Introduction
1 Biomarker Applications in the Pharmaceutical Industry
2 Impact of Biomarker Qualification Regulatory Processes on the Critical Path for Drug Development
3 Regulatory Experience of Biomarker Qualification in the EMA
4 Regulatory Experience at the FDA, EMA, and PMDA
1 Biomarker Applications in the Pharmaceutical Industry
William B. Mattes, PharmPoint Consulting, Poolesville, Maryland, USA
The "explosion" of biomarker research is at least partially a semantic contrivance: as Ian Dews and others have noted, "biomarkers" are not new [1]. Rather, as Dews notes, "the word gave a long-overdue name" to characteristics noted and monitored by the health care profession for at least three millennia for the purposes of diagnosis or prognosis. Indeed, monitoring the pulse to assess the degree of injury is mentioned in the Edwin Smith Papyrus, which describes Egyptian medical practice ca. 1500 BCE [2], and uroscopy as a "science" is considered to date from Hippocrates [3]. The often-quoted definition of the word, developed by the Biomarkers Definitions Working Group in 2001, is:
"a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention" [4].
One might debate whether early uroscopy and the evaluation of pulse were "objectively measured", yet the definition certainly applies to endpoints described prior to the first published use of the word "biomarker". That distinction goes to a 1977 paper examining the hypothesis that serum ribonuclease levels were a "biomarker" of myeloma tumor cells [5]. However, the first publication in PubMed associated with biomarker as an index term dates to a 1947 study of fetuin-A [6], which more recently has been shown to serve as a "biomarker" of coronary artery disease [7] and neurodegenerative disease [8]. Thus, while there has been an "explosion" of publications using the term "biomarker", it was preceded by an "explosion" of publications examining endpoints indexed as biomarkers (Fig. 1.1), which even in 2009 outnumbered publications actually using the term by a factor of 10. The point is that "biomarkers" have only recently received deliberate discussion of their discovery, definition, application, and qualification.
FIGURE 1.1 Biomarker publications indexed in PubMed. Numbers were determined using Medline Trend, Alexandru Dan Corlan's automated yearly statistics of PubMed results (http://dan.corlan.net/medline-trend.html).
With the advent of technologies that allowed the precise determination of DNA sequence and gene expression, a new form of biomarker entered the pantheon: the genomic, or pharmacogenomic, biomarker. While genetic disorders such as Down's syndrome could be characterized on a gross chromosomal level, determination of genetic polymorphisms, i.e., inter-individual variations in the sequence of genes, allowed DNA sequence variations to be correlated with phenotypes relevant to disease and drug treatment. The problem of anticipating individual disease susceptibility or therapy response seemed to become tractable, as these tools gave clear, binary answers as to whether a given gene locus has a particular DNA sequence. In the 1990s the impact of genetic polymorphisms on drug metabolism in both animals and humans became clear and prompted considerable research. Combined with ancillary techniques for determining the individual enzymes responsible for a given drug's biotransformation in both preclinical species and humans, pharmacogenomic approaches offered the promise of tailoring drug development programs so as to avoid or minimize the role of inter-individual variation in ADME (absorption, distribution, metabolism, and excretion). The interest in the drug development community led to a survey of industry practices [9], an International Conference on Harmonisation (ICH) guideline on definitions and sample handling [10], and guidance documents published by both the European Medicines Agency (EMA) [11] and the US Food and Drug Administration (FDA) [12]. The latter document suggests principles for the use of such pharmacogenomic biomarkers in early (i.e., exploratory) clinical studies, such as identifying populations that may require dosing adjustments or groups that might be at high risk for drug–drug interactions.
The document also notes that pharmacogenomic biomarkers may extend beyond drug ADME to include, for example, genetic variations in the drug target; early clinical studies could identify the genetic variants most likely to respond to therapy and use that information for patient enrichment strategies [13] in later clinical trials. The guidance acknowledges that in many cases samples may be collected during a given clinical trial and pharmacogenomic analysis conducted retrospectively to determine possible causes of toxicity or lack of efficacy. Indeed, the industry predilection for retrospective, rather than prospective, analysis of pharmacogenomic biomarkers is echoed in an analysis of industry practices over the period 2003–2008 [14].
The relative caution applied to pharmacogenomic biomarkers may reflect a more general caution in applying relatively novel biomarkers, particularly with regard to clinical trial design. Indeed, trial designs that incorporate biomarker analysis as a variable in addition to the treatment variable have been the subject of many publications [15–18]. However, as noted above, "biomarkers" have been discovered and applied for millennia, and their use qualified through a variety of approaches. While it is not the intent of this chapter to discuss qualification, it is the intent of this book to present examples of the approaches that have recently been used to do so.
The history of biomarkers reflects the inherent assumption that any given biomarker is almost certainly suitable for only a limited number of uses or applications. Such applications have been reviewed and classified in many publications and books, but for the purposes of this book it is worthwhile to consider the various types of biomarkers and applications, as these classifications influence the approaches taken toward qualifying a type of biomarker for a type of application.
Before considering the various types of biomarkers and their applications, it is appropriate to address the one application that may be considered "the elephant in the room": surrogate endpoints. The Biomarkers Definitions Working Group defined a "surrogate endpoint" as:
"A biomarker that is intended to substitute for a clinical endpoint. A surrogate endpoint is expected to predict clinical benefit (or harm or lack of benefit or harm) based on epidemiologic, therapeutic, pathophysiologic, or other scientific evidence." [4]
Indeed, a "clinical endpoint" is "a characteristic or variable that reflects how a patient feels, functions, or survives" [4]. As such, clinical endpoints such as overall survival (OS, the time from randomization to death from any cause) are regarded as the most rigorous and credible measures of clinical benefit from therapeutic intervention. On the other hand, such clinical endpoints generally dictate a large study sample size and long duration, and are influenced by multiple factors [19,20]. Conceptually, a surrogate endpoint is an endpoint that responds to therapeutic intervention in a shorter time frame and/or with a smaller sample size than the clinical endpoint for which it substitutes. A surrogate endpoint thus serves to accelerate the drug development and regulatory registration processes [4,21–23]. Such an application makes regulatory acceptance of a biomarker as a surrogate endpoint highly desirable: 1) to the pharmaceutical industry, as a means of reducing both cost and time to market, and 2) to patient advocates, as a means of bringing promising therapies into practice sooner [24]. However, an effect on a surrogate endpoint is usually not in and of itself a benefit to the patient; rather, the value of a surrogate endpoint response lies in its connection to a subsequent clinical outcome [25]. As one clear example of a surrogate endpoint accepted by both clinicians and regulators, blood pressure has been convincingly shown to be associated with cardiovascular disease risk in numerous epidemiological studies. Importantly, blood pressure responds to therapeutic interventions that improve cardiovascular clinical endpoints (e.g., reduce the incidence of stroke) [25]. Adding confidence to the use of blood pressure as a surrogate endpoint is its response to interventions of several different types, including calcium channel blockers, diuretics, and angiotensin-converting enzyme inhibitors.
Such a volume of supporting data is often not available at the time a biomarker is proposed as a surrogate endpoint. Statistical approaches to confirm a biomarker as a surrogate endpoint were proposed by Prentice in 1989 [26] and continue to evolve [27], but more often than not biomarkers are suggested for use as surrogate endpoints on the basis of correlations and mechanistic assumptions. This reliance on correlation has been called into question [28]. Recently, examples where surrogate endpoints have been used to guide clinical trials but failed to accurately anticipate clinical outcomes [29] have led to skepticism and concern over their use [30], particularly in an accelerated drug approval process [22]. Hence, the subject of biomarkers as surrogate endpoints remains both attractive and controversial. Given the pressing need to develop treatments for chronic and debilitating conditions such as Alzheimer's disease and chronic obstructive pulmonary disease, the slow progression of these diseases, and the lack of satisfactory clinical endpoints [20,31], the debate over the best process for efficiently identifying a biomarker as a surrogate endpoint will certainly continue.
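For the statistically inclined reader, the Prentice approach cited above can be stated compactly. The following is a sketch of the commonly cited operational criteria from the 1989 formulation; the notation (S for the surrogate, T for the true clinical endpoint, Z for treatment assignment) is ours, not the original paper's:

```latex
% Prentice criteria (sketch): a surrogate S is valid for true endpoint T
% with respect to treatment Z when a test of no treatment effect on S is
% also a valid test of no treatment effect on T. Operationally:
\begin{align*}
f(S \mid Z) &\neq f(S) && \text{(treatment affects the surrogate)}\\
f(T \mid Z) &\neq f(T) && \text{(treatment affects the true endpoint)}\\
f(T \mid S) &\neq f(T) && \text{(the surrogate is prognostic for the endpoint)}\\
f(T \mid S, Z) &= f(T \mid S) && \text{(the surrogate fully captures the treatment effect)}
\end{align*}
```

The last condition is the demanding one: given the surrogate, the clinical endpoint carries no residual dependence on treatment, which is rarely verifiable from correlational data alone and motivates the skepticism discussed in the text.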
For the most part, biomarkers have three broad areas of application in both drug development and clinical practice: diagnosis, prognosis, and intervention management. Diagnosis is concerned with determining the current state of the subject and the existence, extent, and characteristics of any disease condition. From a temporal standpoint, diagnosis is focused on the present, and as such diagnostic biomarkers can be benchmarked against other concurrent observations. Prognosis involves a prediction of the probable course and/or outcome of a disease, or of the risk of future disease in an otherwise healthy individual. Both of these applications invoke elements of time and chance, i.e., probability. The multifactorial nature of causality for most diseases makes their development from any given point in time a stochastic process [32], clouding the relationship between any given observation or factor and a diagnosis of disease at a later point in time. While prognostic biomarkers are vitally important, the elements of time and probability make their qualification and application problematic. Indeed, considerable debate and research is ongoing over the methodological and statistical approaches for qualifying predictive/prognostic biomarkers [33–37], highlighting the challenges in their application. The third application of biomarkers, intervention management, may actually be an extension of the use of diagnostic and prognostic markers, but it is worth considering as distinct, as the characteristics of intervention are driven by prior diagnostic and/or prognostic tests.
Diagnostic Applications
Single and Multiplex Biomarkers
As noted, diagnosis is the determination of an existing state or its characteristics. Such "states" could include pathobiology, disease, toxicity or adverse reaction, or exposure to an environmental agent. In clinical practice, diagnosis is usually approached in response to symptoms that a patient presents, such as fatigue or unexplained weight loss. In some cases, a single biomarker may suffice to enable the diagnosis. Thus, a blood glucose level of 200 mg/dL or higher, together with the presence of the previously mentioned symptoms, is strongly diagnostic for diabetes [38]. Similarly, detection of the exotoxin produced by toxigenic strains of Corynebacterium using an enzyme immunoassay serves as a rapid diagnosis for serious diphtheria infections [39]. In the case of heavy metals such as cadmium, exposure can be diagnosed by direct measurement of the element in urine, blood, or tissue [40]. More commonly, a patient's symptoms or condition can be the result of many different etiologies; proper understanding and treatment requires "differential diagnosis" with the use of multiple biomarkers designed to rule out one or more of these etiologies...
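The logic of a single-biomarker diagnostic rule such as the glucose example above can be sketched in a few lines of code. This is an illustrative toy only, not clinical guidance: the 200 mg/dL cutoff comes from the text, but the function name and the symptom list are simplifications we introduce for the example.

```python
# Toy sketch of a single-biomarker diagnostic rule: a quantitative
# biomarker (random blood glucose) combined with presenting symptoms.
# Threshold from the text; symptom set is an illustrative assumption.

GLUCOSE_THRESHOLD_MG_DL = 200  # cutoff cited in the text

def suggests_diabetes(glucose_mg_dl: float, symptoms: set) -> bool:
    """Return True when the rule fires: glucose at or above threshold
    together with at least one classic presenting symptom."""
    classic_symptoms = {"fatigue", "weight loss", "polyuria", "polydipsia"}
    return (glucose_mg_dl >= GLUCOSE_THRESHOLD_MG_DL
            and bool(symptoms & classic_symptoms))

print(suggests_diabetes(215.0, {"fatigue", "weight loss"}))  # True
print(suggests_diabetes(215.0, set()))                       # False: no symptoms
print(suggests_diabetes(140.0, {"fatigue"}))                 # False: below cutoff
```

The point of the sketch is structural: a single-biomarker diagnosis reduces to one threshold comparison plus clinical context, whereas the differential diagnosis described next requires combining multiple biomarkers to rule etiologies in or out.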