The Challenges of Predicting Toxicity

Many challenges must be overcome before the toxicity of a novel chemical structure can be predicted accurately. Fundamental issues, such as the complexity of biological systems and the ability of a chemical to interact with a biological system in multiple ways that may all manifest as similar toxic events, mean that the problem of predicting toxicity is inherently not straightforward.

The diverse types of information that relate to the characterization of toxicological risk of chemicals3 can be represented schematically as layers of an onion,4 with the chemical of concern at the center, and the human health risk assessment in the outermost layer (Figure 1). The intervening layers range from calculated or measured physical properties, to mechanistically defined biochemical interactions, to in vitro cell culture bioassays, to in vivo responses in whole animals and populations. The radius of each layer increases according to the level of biological complexity and proximity to human health risk assessment, whereas the boundaries between the layers separate physically and conceptually distinct types of information. Structure-activity relationships (SARs) are models that attempt to extrapolate from the center of the onion, i.e., the chemical structure, to different types of biological endpoints.
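In the simplest case, the extrapolation that an SAR model performs can be illustrated as a regression from a structure-derived descriptor to a biological endpoint. The sketch below is purely illustrative, assuming hypothetical logP descriptor values and hypothetical log(1/LC50) endpoint data; it is not a real SAR from the literature.

```python
# Minimal SAR sketch: a one-descriptor linear model relating a calculated
# property (hypothetical logP values) to a measured toxicity endpoint
# (hypothetical log(1/LC50) values). All numbers are illustrative.

def fit_linear_sar(x, y):
    """Ordinary least-squares fit of y = a*x + b for a single descriptor."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

# Hypothetical training set: descriptor values vs. observed endpoint values
logp = [1.0, 2.0, 3.0, 4.0]
tox = [0.9, 2.1, 2.9, 4.1]

a, b = fit_linear_sar(logp, tox)
# Extrapolate from the chemical structure (via its descriptor) to the endpoint
predicted = a * 2.5 + b
```

Real SAR models use many descriptors and far richer statistics, but the principle is the same: the model reaches from the center of the onion (structure) outward to a biological layer it was trained against, and its reliability degrades the further out it must reach.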

Figure 1 Layers of chemical and biological organization relevant to the toxicity prediction problem. (Adapted from Richard, A. M. Knowledge Eng. Rev. 1999, 14, 1-12.)

The nature of toxicity data

Most toxicity data consist of observations of a treated biological system relative to a control group. These observations are rarely made at the molecular level, which makes relationships to chemical structure difficult to identify. Species differences, together with genetic variation within and between populations, complicate the issues further.

Factors affecting the determination of mechanisms of toxicity

The dose, metabolic fate, and exposure of a compound are also factors determining the potential for toxicity, and each needs to be considered in any mechanistic evaluation. The relationship between dose and response is one of the most fundamental concepts of toxicology, yet one that is often forgotten in the public debate about the safety of new medicines.

"What is there that is not a poison? All things are poison and nothing is without poison. Solely the dose determines that a thing is not a poison." (Paracelsus, 1493-1541.)

The metabolic fate of a chemical, or the susceptibility of a compound to undergo biological transformations, can have a profound effect on its ability to cause toxicity. Often a drug compound itself can be benign, but can be metabolized to a reactive intermediate that can elicit a toxic response following exposure to the drug. In other instances, metabolism can lead to detoxification of a drug molecule. Kalgutkar et al.5 provide an extensive review of typical substructures that have been shown to be metabolized to reactive intermediates and have been associated with the expression of adverse drug reactions (ADRs).

One classic example in which excessive dose and metabolic activation combine to cause toxicity is paracetamol. When a person takes paracetamol, metabolism in the liver produces small amounts of N-acetyl-p-benzoquinone imine (NAPQI) (Figure 2). At normal dose levels, this potentially toxic compound is quickly conjugated with glutathione, neutralizing the toxic effects of NAPQI.

The problem occurs when the body's glutathione stores run out. As glutathione is depleted, NAPQI is not detoxified; it covalently binds to the lipid bilayer of hepatocytes, causing centrilobular necrosis and hence hepatotoxicity. The maximum daily dose of paracetamol is 4 g in adults and 90 mg kg⁻¹ in children. A single ingestion

