High-Throughput Screening Methods in Toxicity Testing
About This Book

Explores the benefits and limitations of the latest high-throughput screening methods

With its expert coverage of high-throughput in vitro screening methods for toxicity testing, this book makes it possible for researchers to accelerate and streamline the evaluation and risk assessment of chemicals and drugs for toxicity. Moreover, it enables them to comply with the latest standards set forth by the U.S. National Research Council's "Toxicity Testing in the 21st Century: A Vision and Strategy" and the E.U.'s REACH legislation. Readers will discover a variety of state-of-the-science, high-throughput screening methods presented by a group of leading authorities in toxicology and toxicity testing.

High-Throughput Screening Methods in Toxicity Testing is divided into five parts:

  • General aspects, including predicting the toxicity potential of chemicals and drugs via high-throughput bioactivity profiling
  • Assessing different cytotoxicity endpoints
  • Assessing DNA damage and carcinogenesis
  • Assessing reproductive toxicity, cardiotoxicity, and haematotoxicity
  • Assessing drug metabolism and receptor-related toxicity

Each chapter describes method principles and includes detailed information about data generation, data analysis, and applications in risk assessment. The authors not only enumerate the advantages of each high-throughput method over comparable conventional methods, but also point out the high-throughput method's limitations and potential pitfalls. In addition, the authors describe current research efforts to make high-throughput toxicity screening even more cost effective and streamlined.

Throughout the book, readers will find plenty of figures and illustrations to help them understand and perform the latest high-throughput toxicity screening methods.

This book is ideal for toxicologists and other researchers who need to implement high-throughput screening methods for toxicity testing in their laboratories as well as for researchers who need to evaluate the data generated by these methods.


Information

Author
Pablo Steinberg
Publisher
Wiley
Year
2013
ISBN
9781118538241
PART I
GENERAL ASPECTS
1 ToxCast: PREDICTING TOXICITY POTENTIAL THROUGH HIGH-THROUGHPUT BIOACTIVITY PROFILING
KEITH A. HOUCK, ANN M. RICHARD, RICHARD S. JUDSON, MATTHEW T. MARTIN, DAVID M. REIF, AND IMRAN SHAH
1.1 INTRODUCTION
Chemical safety assessment has long relied on exposing a few species of laboratory animals to high doses of chemicals and observing adverse effects. These results are extrapolated to humans by applying safety factors (uncertainty factors) that account for species differences, susceptible sub-populations, extrapolation of no observed adverse effect levels (NOAELs) from lowest observed adverse effect levels (LOAELs), and data gaps, yielding theoretically safe exposure limits. This approach is often criticized for lack of relevance to human health effects because of the many demonstrated differences in physiology, metabolism, and toxicological effects between humans and rodents or other laboratory animals [1]. Such criticism persists mainly because the specific mechanisms of toxicity, and whether they are relevant to humans, are unknown. Toxicological modes of action (MOA) have been elucidated for only a limited number of chemicals; even fewer chemicals have had their specific molecular mechanisms of action determined. Such detailed knowledge would give higher confidence in species extrapolation and in setting exposure limits. However, tens of thousands of chemicals currently in commerce with some potential for human exposure lack even traditional toxicity testing, let alone elucidated modes or mechanisms of toxicity [2]. Understanding a mechanism of toxicity usually requires decades of research dedicated to a single chemical of interest, a model unsuitable for such vast numbers of chemicals. Even with dedicated research, such efforts are not guaranteed to succeed; the extended effort to understand the mechanism of toxicity of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) is an example [3]. Traditional animal testing, in addition to attracting the criticisms discussed above, is neither appropriate nor feasible for the large number of untested chemicals, given the high costs and number of animals required [1].
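To make the traditional derivation concrete, the following is a minimal Python sketch (with hypothetical values, not taken from the chapter) of how a theoretically safe exposure limit such as a reference dose is obtained by dividing a NOAEL by a composite of uncertainty factors:

    # Hypothetical NOAEL from an animal study, in mg/kg body weight per day.
    noael = 10.0

    # Standard uncertainty factors; a factor of 1 means it is not applied here.
    uncertainty_factors = {
        "interspecies": 10,     # animal-to-human extrapolation
        "intraspecies": 10,     # susceptible human sub-populations
        "loael_to_noael": 1,    # 10 would be used if only a LOAEL were available
        "database_gaps": 1,     # 10 would be used if key studies were missing
    }

    composite_uf = 1
    for factor in uncertainty_factors.values():
        composite_uf *= factor

    reference_dose = noael / composite_uf
    print(f"Composite UF = {composite_uf}, reference dose = {reference_dose} mg/kg/day")
    # Composite UF = 100, reference dose = 0.1 mg/kg/day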
One major effort to address this dilemma by providing a high-capacity alternative is underway, facilitated by the integration of computational toxicology and high-throughput in vitro testing [4,5]. The ultimate goals of this approach are to screen and prioritize thousands of chemicals, predict their potential for human health effects, and derive safe exposure levels for the myriad chemicals to which we are exposed. The approach relies on a shift in toxicology research away from "black-box" testing on whole animals and toward an understanding of the direct interactions of chemicals with a broad spectrum of potential toxicity targets, comprising specific molecular entities and cellular phenotypes. Such bioactivity profiling, generated through high-throughput approaches, produces characteristic chemical response profiles, or signatures, that may describe a chemical's potential for toxicity [6].
Computational analysis and modeling of the results are required to provide insight into these complex datasets and to support the development of predictive toxicity algorithms that may ultimately serve as the foundation of an in vitro toxicity testing approach replacing most or all animal testing. The groundwork required for a computational toxicology approach is the generation of datasets describing the quantitative effects of chemicals on biological targets. Two types of data are required. The first comprises the results of in vitro and/or in silico assays that can be run in high-throughput mode and provide bioactivity profiles for hundreds to thousands of chemicals. The second is a dataset detailing the effects of these chemicals on whole organisms, ideally the species of interest. These data are used to anchor and build predictive models that can then be applied to chemicals lacking in vivo testing. Generation of the in vitro dataset has become feasible as high-throughput in vitro screening technology, developed in support of the drug discovery community, has become widely available. The selection and use of these assays for computational toxicology will be discussed further in Section 1.4. Obtaining the latter dataset of in vivo effects necessary to build the computational models presents unique challenges. Although thousands of chemicals have been tested using in vivo approaches, only a limited amount of this information is readily available. Much of it exists in formats not readily conducive to computational analysis (e.g., paper records), lies in the data stores of private corporations, or is protected by confidentiality clauses [7], and the generation of extensive new in vivo data to support the approach is cost prohibitive. The access and collation of these data into a relational database useful for computational toxicology will be discussed in Section 1.5.
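As a loose illustration of how these two datasets fit together (synthetic data and a generic classifier, not the actual ToxCast pipeline), the in vitro bioactivity profiles can be treated as a feature matrix and the in vivo outcomes as labels anchoring a predictive model:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_chemicals, n_assays = 300, 50

    # Hypothetical in vitro dataset: one row per chemical, one column per assay readout.
    X = rng.normal(size=(n_chemicals, n_assays))

    # Hypothetical in vivo anchoring dataset: a binary adverse-endpoint flag,
    # synthetically tied to the first two assays purely for demonstration.
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_chemicals)) > 0

    model = LogisticRegression(max_iter=1000)
    print(cross_val_score(model, X, y, cv=5).mean())  # estimated predictive performance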
Beyond the technical aspects of generating the data, assembling the collection of required datasets to support computational approaches is a challenging task in itself. Robust, efficient, and accurate knowledge discovery from large datasets requires a robust data infrastructure. There are a number of critical steps in the process, beginning with the design of an underlying architecture to manage the data. Appropriate data must be selected and preprocessed into common formats usable by computer programs (e.g., standardized field names for the types of attributes being measured, standardized chemical names, and links to other data sources). The use of standardized ontologies can be particularly useful for sharing information across organizations [8]. Because of the complexity of achieving this on a large scale, these approaches are perhaps best conducted by large organizations with access to computational scientists in addition to experts in chemistry, toxicology, statistics, and high-throughput screening (HTS). Examples of the integration of these diverse areas of expertise include the U.S. Environmental Protection Agency's (EPA) ToxCast program [4] and the Tox21 collaboration among the EPA, the National Toxicology Program, the National Institutes of Health Center for Translational Therapeutics (NCTT; formerly the NIH Chemical Genomics Center [NCGC]), and the U.S. Food and Drug Administration [9,10]. In addition, a number of large pharmaceutical companies have internal programs in this area relying on their own extensive in-house expertise [11,12].
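A minimal sketch of the preprocessing step (all field names and the example record are hypothetical, shown only to make the idea concrete) might map heterogeneous source fields onto one standardized schema:

    # Map the field names used by different data sources onto a standard schema.
    FIELD_MAP = {
        "Chemical": "chemical_name",
        "chem_name": "chemical_name",
        "CAS No.": "casrn",
        "casrn": "casrn",
        "AC50 (uM)": "ac50_um",
    }

    def normalize(record):
        """Rename known fields; flag unrecognized ones for manual review."""
        out = {}
        for key, value in record.items():
            out[FIELD_MAP.get(key, "unmapped_" + key)] = value
        return out

    print(normalize({"Chemical": "bisphenol A", "CAS No.": "80-05-7", "AC50 (uM)": 3.2}))
    # {'chemical_name': 'bisphenol A', 'casrn': '80-05-7', 'ac50_um': 3.2}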
As described, the ultimate goal is to use high-throughput in vitro assays to rapidly and inexpensively profile the bioactivity of chemicals of unknown toxicity and to predict their potential for causing various adverse endpoints [4]. Achieving a robust, predictive toxicology testing program is a long-range goal that will need to proceed through a number of systematic stages, including proof of concept, extension of chemical and bioassay diversity, refinement, and, ultimately, supplementation or replacement of existing methods. The initial stage involves multiple steps: (1) selecting an appropriate chemical test set for which in vivo data are available; (2) selecting high-throughput biological assays for screening the chemicals; (3) generating the screening data on the chemicals; (4) collating the in vivo anchoring data for the chemicals; and (5) building predictive models. Such models can then be validated by testing additional chemicals with known toxicity endpoints to determine the robustness of the models. The development of the test systems, as well as of the computational models, is likely to be an iterative process: new biological assays and statistical approaches are evaluated for potential inclusion in the program, while assays and models not producing useful results are dropped.
Success at this stage of the process would take the form of models judged useful for prioritizing chemicals according to their potential to cause specific toxicity endpoints. Such prioritization will be valuable in the short term by allowing limited in vivo testing resources to be focused on the chemicals most likely to be of concern. Targeted testing of designated chemicals for specific endpoints should also reduce the use of test animals, as only a limited set of endpoints would need to be evaluated. This targeted testing will, in addition, provide a further means of validating the testing program, that is: do the adverse endpoints predicted by the models occur to a significant extent in the tested chemicals? Ultimately, refinement of the testing and modeling approaches should allow high-confidence prediction of the likelihood of toxicity, thereby avoiding animal testing altogether for many chemicals. The remainder of this chapter focuses on the steps undertaken in developing the initial stages of the ToxCast testing program at EPA, as well as on examples of applying the program to prioritize environmental chemicals for multiple toxicity endpoints.
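In code terms, the short-term prioritization use might look like the following sketch, where model is assumed to be a classifier fitted on the anchoring data (as above) and profiles holds bioactivity profiles for untested chemicals; none of this is actual ToxCast code:

    def prioritize(model, profiles, chemical_ids, top_n=20):
        """Rank untested chemicals by predicted probability of an adverse endpoint."""
        scores = model.predict_proba(profiles)[:, 1]  # probability of the adverse class
        ranked = sorted(zip(chemical_ids, scores), key=lambda pair: pair[1], reverse=True)
        # The top-ranked chemicals become candidates for focused, limited in vivo testing.
        return ranked[:top_n]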
1.2 CHEMICAL LANDSCAPE
A major driver of the development and use of HTS methods in toxicology is the scope of the chemical problem: tens of thousands of chemicals to which individuals are potentially exposed, the majority of which have never been tested in any significant way [2]. Which chemicals are of interest, and what kind of data is likely to be available for them, depends on the use of each chemical, which in turn determines the regulations to which it is subject. To understand the universe of chemicals that are of concern for potential toxicity and are candidates for testing, it is useful to consider a set of chemical inventories, some of which overlap.
1.2.1 Pesticide Active Ingredients
These are typically the active compounds in pesticide formulations, which are designed to be toxic to select types of organisms. A related category of compounds falling under this general label is the antimicrobials, which are likewise designed to be toxic to certain organisms, in this case fungi or bacteria. These groups of chemicals are further divided into food-use and nonfood-use actives for regulatory purposes. EPA sets tolerance levels for pesticides that may be used on specific foods, for particular purposes, and at particular exposure levels; that is, EPA regulates the maximum amount of pesticide residue permitted to remain on a food approved for pesticide application. FDA, in contrast, has the authority to monitor levels of food-use pesticides and enforce their compliance with EPA regulations; FDA has additional authority over the use of antimicrobials in food packaging [13]. Food-use pesticide actives have the highest data requirements, and for these a company will typically generate data from 2-year chronic/cancer bioassays in rats and mice, developmental toxicity studies in rats and rabbits, multigenerational reproductive toxicity studies in rats, and other specialized in vivo studies [14]. These requirements are similar to the complete set of preclinical studies required for human pharmaceuticals. Because of this large data requirement, these chemicals are ideal for building toxicity prediction models, since near-complete in vitro and in vivo datasets will be available. It is not surprising that pesticide actives share some features and chemical properties with pharmaceutical products, given that they are often designed to interact with a specific molecular target.
1.2.2 Pesticidal Inerts
These are all of the ingredients in a pesticide product or formulation other than the active ingredients. Although they are labeled "inert," there is no requirement that they be nontoxic. They can range from solvents (e.g., benzene) to animal attractants, such as peanut butter or rancid milk. As with the actives, inerts are classified as food-use and nonfood-use. Regulatory data requirements are generally limited, so little in vivo data are available [15].
1.2.3 Industrial Chemicals
This is an extremely broad class of chemicals in...

Table of contents

  1. Cover
  2. Title Page
  3. Copyright
  4. Preface
  5. Contributors
  6. Part I: General Aspects
  7. Part II: High-Throughput Assays to Assess Different Cytotoxicity Endpoints
  8. Part III: High-Throughput Assays to Assess DNA Damage and Carcinogenesis
  9. Part IV: High-Throughput Assays to Assess Reproductive Toxicity, Cardiotoxicity, and Haematotoxicity
  10. Part V: High-Throughput Assays to Assess Drug Metabolism and Receptor-Related Toxicity
  11. Index