The recent series of posts has featured different levels of recognition of environmental poisons and related defense processes, ranging from taste receptors to drug export mechanisms. But one layer has not been addressed so far, and this is the present theme.
This oversight on my part was drawn to my attention by no less than a domestic cat, not normally known for paying much attention to blogs of any description. In my presence, this little animal suddenly vomited up half a cigarette, an improbable addition to what most observers would consider a healthy feline diet. Forgive my bringing this up, so to speak, but it did serve as the springboard for further thoughts. It’s not clear what the consequences of ingesting cigarettes would be for cats, but presumably it would not have much nutritional benefit. But whatever prompted eating this suspect item in the first place (which we’ll consider a little further towards the end of this post), the cat’s stomach (or at least some part of its digestive apparatus) strongly and emphatically pressed a metaphorical reject button, and saved this pet from perhaps an unpleasant nicotine-related encounter. (Rest assured, she was none the worse for the experience, and didn’t have to clean up the mess).
In previous posts (28th March and 30th May of 2012), the bitter taste receptors were considered as a frontline defense against ingestion of environmental poisons. Anything getting past this guard can then potentially be neutralized through a variety of xenorecognition and xenoprocessing mechanisms (also considered in a previous post). But in between lies another level of defense, as any cigarette-eating cat could show you. Bad things that get past the oral cavity into the stomach can potentially be prevented from doing harm to the whole organism, if they can be physically ejected as soon as possible. Regurgitation can at least greatly reduce a toxic load, potentially bringing down the exposure to levels manageable by other xenoprocessing mechanisms, and thereby having life-saving (and in turn, evolutionary fitness) implications. This area might seem trivial, but further thought shows that it certainly is not. Regurgitation is a complex and coordinated series of muscular actions, which clearly must have some kind of trigger to initiate it. What external agents then stimulate this response, how are they recognized, and how is the resulting reflex produced?
Emetogens and Their Receptors
A particular focus of attention in the field of emesis has arisen as a result of empirical results in cancer chemotherapy over decades of its application and continuous refinement. Put simply, in the absence of simultaneous anti-emetic treatments, some anticancer drugs are highly emetogenic (inducing nausea and vomiting), but there are marked differences in their relative potencies in this regard. For example, the well-known drug cisplatin, a tremendous advance in the treatment of certain tumors, is nonetheless notorious for its emetogenic effects. On the other hand, drugs such as vincristine and bleomycin are very low in inducing this highly unpleasant side-effect, although they certainly must be administered with great care due to their toxicities. (Conventional anticancer drug cytotoxicity typically has low selectivity towards tumors, and thus any dividing host cell may be potentially affected as ‘collateral damage’).
Much clinical research has understandably focused on ways of minimizing the distressing induction of emesis by the necessary anticancer regimens. Effective anti-emetic drugs target relevant neural receptors (such as the 5-hydroxytryptamine (serotonin) type 3 receptor) involved in transmission of the emetogenic signals. As one would expect for a complex behavior pattern, emesis is ultimately controlled by the brain. In terms of the complexity of emetic effects, it should be noted that in addition to specific substances, vomiting can be induced by pregnancy or physical stimuli (as with motion sickness), or can arise from psychogenic origins (consider any distressing influence which is literally sickening). At one time, a specific neural center was postulated to act as an emetic controller, but more recent evidence suggests cooperating regions of the medulla oblongata (in the hindbrain) are involved. Input signaling implicates a region of the medulla called the Area Postrema, which very significantly is not restricted by the blood-brain barrier, and is thereby able to potentially sample blood-borne xenobiotics. In addition, other evidence suggests emetogenic primary signaling originates from intestinal sites. Gut vs. blood-borne sensing might be viewed as two separate levels of emetogenic detection, since orally ingested poisons will normally encounter the gut receptors first. Nevertheless, in both cases the chemosensing and neural transduction of signals have common results.
Yet this information does not directly address the nature of the chemoreception which transduces toxin-induced emetic signaling in the first place, and it is apparent that there is still much to be learned in this area. It would seem reasonable to postulate a role for bitter taste receptors in this signaling process, based on the assumption that specific chemoreceptors are involved. This follows from relatively recent observations showing that the TAS2R bitter receptors are expressed not only in taste buds, but at a number of distinct anatomical sites, including the gut and the brain. (This was also alluded to in the previous post). More indirectly, the redeployment of a primary xenoreceptor set in a second-round protection mechanism would from first principles appear to be a parsimonious evolutionary pathway.
Still, no evidence appears to support this proposal at present. But if TAS2R receptors were involved, it might be predicted that at least a broad correlation would exist between the perceived bitterness of a compound and its emetogenicity. (In other words, the more bitter a compound, the more it would tend to induce emetic effects). But this proposition can immediately be challenged on several grounds. Firstly, emesis can be induced by sufficient concentrations of simple salts (such as lithium chloride or copper salts), which do not engage bitter taste reception. And secondly, no evidence suggests any significant correlation between the degree of bitterness and the emetogenicity of a compound, although systematic information in this regard seems to be lacking. One problem here is the measurement of emetogenic potential itself, and its variation between species. (Obviously, human experimentation in this area has many ethical constraints). But the absence of discernible linkage between bitterness and emetic potency is conveyed by the bitterest known compound, ‘denatonium’, an artificial derivative of the anesthetic lidocaine. Despite this compound’s intense bitterness, it has low toxicity relative to many natural bitter substances (noted further below). While denatonium salts are likely to induce emesis if the dose is high enough, this question does not appear to have been systematically studied. But at the least, if the emetogenic signal paralleled bitter perception, denatonium would also be the most potent known emetogen, and there is certainly no evidence for this. Also relevant is the low emetogenicity of the anticancer drug vincristine (noted above), despite its nature as a bitter-tasting plant alkaloid. Bitterness per se and emesis therefore cannot be closely associated.
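As an aside, the kind of broad correlation test proposed above could be framed as a simple rank correlation between bitterness thresholds and emetic potencies across a panel of compounds. The sketch below is purely illustrative: the compound values are hypothetical placeholders (not literature data), and the rank-correlation statistic (Spearman’s rho) is implemented from scratch in plain Python.

```python
# Illustrative sketch: testing for a rank correlation between bitterness
# thresholds and emetic potency across a compound panel.
# All numeric values below are hypothetical placeholders, not measured data.

def ranks(values):
    """Assign ranks (1 = smallest value); ties receive the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical bitterness thresholds (lower = more bitter) and emetic
# potencies (higher = more emetogenic) for five imaginary compounds:
bitterness_threshold = [0.05, 8.0, 0.5, 20.0, 2.0]
emetic_potency       = [1.0, 5.0, 2.0, 0.5, 30.0]

rho = spearman(bitterness_threshold, emetic_potency)
print(round(rho, 2))  # -0.1 for these placeholder values: no strong association
```

A rho near +1 or -1 across a large, well-measured panel would be the kind of positive read-out discussed below; values near zero (as with these invented numbers) are uninformative.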
Nevertheless, these observations do not rule out a role for bitter taste receptors in emesis, since many complicating factors might cause divergence between the perceptual signaling of bitterness, and signaling from the same receptors in different physiological sites. For example, both the range of specific TAS2R receptors and their signaling transduction mechanisms might differ between oral and gastric or brain receptors, such that a strong bitter signal does not necessarily produce an analogously strong emetic response. Additional taste receptors beyond the TAS2R set might also be involved, as a possible explanation for emesis induced by salts (also noted above). Thus, as in a great many areas of biology, only a positive read-out here is very useful. (In other words, if a very strong correlation between perceptual bitterness and emetogenicity did exist, it would certainly be consistent with the use of TAS2R receptors in both contexts – but even this, of course, would require more direct information before being proven).
In a general model of signaling which leads to emesis, cells receptive to chemical or other stimuli secrete neurotransmitters upon activation, which in turn activates adjacent neural signaling cells, with resulting common higher-level sensation and behavioral outcomes (nausea and vomiting). By such means, similar effects can be elicited by diverse signals, ranging from a variety of chemicals (from inorganic salts to complex organic compounds), to disagreeable motion stimuli and psychogenic causes. This arrangement has a certain logic to it, since it is unnecessary for the final results (emesis) to qualitatively differ as a consequence of different origins. In this sense, the emetic signaling may be considered convergent from different receptors and different neurotransmitters towards a common neural response. This can be contrasted with the sense of taste, which has both divergent and convergent aspects. With respect to the latter, a wide range of different compounds activate TAS2R bitter receptors, and different sets of compounds (albeit probably less diverse) also converge on activation of sweet receptors. But since the biological functions of bitter and sweet sensing are radically distinct, it would make no sense for their sensory output to converge, and this is obvious from experience. (It is also consistent with recent studies showing divergent brain regions activated by the respective types of taste stimuli).
Since antagonists of relevant neuroreceptors (signal blockers) are effective anti-emetics, it might be expected that corresponding agonists (signal activators) should be strong emetic agents. Such agents would then directly potentiate the signaling neural cells, rather than acting indirectly via chemoreception (for example) and specific neurotransmitter release. While not false, such reasoning is nonetheless simplistic, since a specific neurotransmitter typically binds not just one receptor but a family of receptors, each of which can transduce distinct signaling outcomes. The activity of an agent is then greatly dependent on its specificity for a particular receptor subtype, and the nature of its interaction. Yet there are certainly precedents. As noted above, many anti-emetic drugs target the 5-hydroxytryptamine type 3 receptor, and an agonist of this same receptor, phenylbiguanide, is (among other pharmacological properties) a strong emetogen. Neurotransmission triggered by the peptide mediator cholecystokinin is also involved in emesis, and a particular cholecystokinin variant (CCK-8) is a highly potent emetic in humans, far more so even than the most active cancer cytotoxic drugs.
In this brief overview, the possible role of taste receptors in emesis has been considered, but olfactory receptors might also be implicated in humans. In this case, associated serotonin release again provides a mechanistic convergence with above-noted emetic signaling processes. Certainly some chemicals can invoke a nauseous response simply from exposure to their volatile odors (pyridine is one example that comes to mind, from personal experience).
Non-emetic Mammals and Behavior-driven Xenoprotection
While considering the role of emesis as another level of xenoprotection, one must account for circumstances where it is absent. This is well demonstrated by rats and mice, whose physiology does not permit the emetic reflex. It has been suggested that these rodents side-step the need for vomiting to some extent through highly sensitive food sampling behavior, and conditioned avoidance of foods which have undesirable effects. Failing this, such animals have been shown to ingest inorganic materials (especially clays), which act as adsorptive detoxifying agents, a behavior termed pica. The interesting parallel between pica and emesis is shown by experiments where rat pica is induced by emetogens and mitigated by anti-emetic drugs. Given these observations, both learned food avoidance and pica emerge as xenoprotective strategies, where higher-level behavior patterns are crucial elements. Conditioned food avoidance in rats has been associated with chemosensing in the Area Postrema, noted above as an important signaling center in emetic animals. Pica has certain conceptual overlap with ‘zoopharmacognosy’ (considered in detail in a previous post), where animals ‘self-medicate’ by consuming environmental bioproducts (principally plant materials) for health-related reasons. Such innate behavior patterns have clear survival value, and would be positively selected on that basis.
Given the proposed increased reliance of rats on primary taste sensing for detecting (and subsequently avoiding) noxious substances, it is of interest to note apparent strong divergences between rat and human bitter taste perception. In particular, the above-mentioned denatonium, exquisitely potent as a bitterant as measured by human sensing, is markedly less so in rats. This is clearly evident through a practical use of denatonium salts as safety additives to rat poisons, in order to help prevent accidental consumption by humans (especially children, whose aversion towards bitterants tends to be stronger than that of adults). Obviously, this strategy would fail if rats were as sensitive to the intense bitterness of denatonium as we humans are ourselves. From a rat’s point of view, this might seem unfortunate, but in reality, at least in this specific instance, the rat bitter taste response is much more in tune with the actual toxicity of denatonium. (The human perception of denatonium is far out of proportion to its toxicity, as noted a little further below). It would be interesting to see if the bitter taste perceptual repertoire of rats in general has a better correspondence with actual chemical toxicity than that shown by human responses. This too would be in line with more intense selection pressures on rat bitterant tasting than on that of primates during the evolutionary past.
In humans, bitterness vs. toxicity can be addressed by comparing thresholds of bitter taste with toxic responses for a wide range of compounds. Assessing the outer limits of bitterness can usually be done (with highly dilute solutions of test compounds), but lethal dosages can only be obtained through accidental poisonings, which obviously are both undesirable and poorly controlled. The situation is almost the opposite with rats, where controlled toxicity testing is a standard laboratory practice, but rats generally have trouble reporting when they first can perceive bitterness in a dilution series of a compound. In lieu of this, minimal chemical concentrations creating aversion can be tested, but this is not the same thing as assaying the lowest concentrations perceivable. More sophisticated testing is possible with in vitro assays for triggering of human vs. rat taste receptors, but this is at the level of primary signaling rather than perceptual awareness. An example of some assembled literature data is shown below, incomplete for the rat, but which partially illustrates the disconnect between human perception and toxic response for denatonium.
Top graph: Human bitterness indices for denatonium, strychnine, and brucine, equivalent to bitterness thresholds for each, normalized to that for quinine (i.e., where quinine bitterness index =1), compared to available information on approximate lethal adult dosages. (Note log scale on X-axis). These are compared with rat laboratory toxicity indices (LD50 values also normalized to that for quinine). The table below shows the original figures for calculating the indices. Note apparent differential susceptibilities for brucine vs. strychnine for humans and rats.
Sources: General: NCBI toxnet; Taste Perception in Humans, from Neuroscience. 2nd edition. Purves D, Augustine GJ, Fitzpatrick D, et al., editors. Sunderland (MA): Sinauer Associates; 2001; The Alkaloids: Chemistry and Physiology, Volume 43. Geoffrey A. Cordell, Richard Helmuth Fred Manske Eds; Academic Press 1993. Also Hansen et al. 1993 (denatonium benzoate). Where appropriate, values shown here have been taken as the midpoints of measured experimental ranges.
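For what it’s worth, the index calculation described in the caption above is simple to state in code. The sketch below uses made-up placeholder values rather than the literature figures, and shows only the normalization step: each compound’s raw value is divided by the corresponding quinine value, so that quinine’s index is 1 by construction.

```python
# Sketch of the caption's normalization: divide each compound's value
# (bitterness threshold or LD50) by the quinine value, giving an index
# where quinine = 1. The numbers here are placeholders chosen only to
# demonstrate the arithmetic, not the assembled literature data.

thresholds_uM = {          # hypothetical human bitterness thresholds (micromolar)
    "quinine":    8.0,
    "denatonium": 0.05,
    "strychnine": 1.0,
    "brucine":    4.0,
}

def normalized_indices(values, reference="quinine"):
    """Divide every value by the reference compound's value."""
    ref = values[reference]
    return {name: v / ref for name, v in values.items()}

indices = normalized_indices(thresholds_uM)
for name, idx in sorted(indices.items(), key=lambda kv: kv[1]):
    print(f"{name}: {idx:g}")
```

The same function applied to LD50 figures would give the toxicity indices, letting the two normalized scales be plotted side by side as in the graph described above.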
In any case, with the inclusion of both conditioned aversion and the pica ‘toxic sequestration’ strategies, we can now define a broader picture of xenoprotection, as depicted below:
Schematic depiction of different levels of xenodefenses. A: Avoidance of noxious materials via aversive taste responses, which includes conditioned avoidance as observed with rats; B: Ejection of poisons via emesis, whether emetic sensing occurs within the gut or via sensing of blood-borne compounds; C: Sequestration of ingested poisons by ingestion of clays or related materials (pica); D: Internal xeno-defenses, as considered previously. For detail, see relevant previously-posted diagram, from the post of 28th March, 2012.
The paradox of bitterness
So far we’ve seen that, in humans at least, bitterness correlates poorly with the potency of chemical emetogenicity. But if we consider the perception of bitterness in its entirety, it becomes clear that it is also an imperfect correlate of aversion itself, which is its accepted direct evolutionary rationale. It has thus been noted that complete avoidance of absolutely all bitter substances would have negative nutritional consequences. But if certain environmental compounds are potentially useful, why should these register as bitter in the first place? After all, bitterness is a perception resulting from triggering of specific receptors, not an inherent property of a molecule, so for what reason should a useful molecule be thrown into the same ‘bitter’ grab-bag as a motley collection of poisons?
One issue in at least a subset of cases could be the existence of similarities in molecular shape between potentially useful compounds and wholly deleterious poisons, such that they are recognized by the same range of TAS2R bitter receptors. While evolution of receptors capable of discriminating even subtle molecular differences is possible in principle, such changes may be constrained in practice by lack of effective selective pressures. But in any case, a better evolutionary result (as dictated by fitness benefits) might simply be more nuanced perception related to the strength of the bitterness signal. A low-level bitter taste (especially when other tastants are also present) might overlap with a pleasure response in some circumstances. So a weakly bitter (but possibly useful) nutrient might then be consciously ingested, but the background bitterness would serve to limit overdosing. Certainly in human adults, a certain amount of bitterness in food or drink is often prized. Among many possible examples, the alkaloid quinine (long employed as a treatment for malaria, as noted in a previous post) is still used as a bitterant in certain drinks, including bitter lemon or tonic water. Given that the preference for this kind of additive is not everyone’s ‘cup of tea’, the variation therein probably arises from a combination of both genetic differences in taste receptor repertoires and positive conditioning towards acceptance (development of an ‘acquired taste’). But there are levels of bitterness beyond which no normal human will voluntarily go. It was for that reason that reference to bitterness as an aversive factor in previous posts often included the adjective ‘intensely’, to distinguish such uniformly negative perceptions from lower-grade bitterness which in some people provides a pleasurable stimulus.
Finally, the ‘bitterness paradox’ prompts a loop back to the cat and the cigarette which initiated this post. Despite the bitterness and potential aversive power of tobacco, it remains a possibility that it was consumed from an instinctive drive towards ingesting potentially anti-parasitic compounds. If so, it might be a case of innate feline zoopharmacognosy. Indeed, there is evidence that leaves of the tobacco plant have certain antiparasitic properties, and cats regularly consume grasses if given the opportunity, which might in part be related to innate ‘self-medication’. Even so, the negatives of cigarette-eating probably outweigh any potential benefit, and such behavior could then be considered a misfiring of an instinctive programming mechanism.
Anyway, to conclude with a biopoly(verse) offering on the poison sequestration theme:
Rats can never show emetic display
So what control keeps rat poisons at bay?
Through a sudden ‘Eureka!’
Comes the answer: It’s pica!
They thus sequester their toxins with clay.
This one hinges on what is apparently a non-standard pronunciation of pica as ‘peeker’. Although some sources do give this as a possible alternative, more usually it is rendered as sounding like ‘piker’. While this is not an Earth-shattering issue for most purposes (‘you say tom-may-to, I say tom-mah-to…’) it does tend to ruin a little verse if one’s pronunciation expectations are violated. So, to accommodate the alternative:
When poisoned, a rat may eat clay
(Emesis is never the way)
Perhaps this is like a
Sick human with pica
In keeping bad toxins at bay.
References & Details
(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).
‘……this little animal suddenly vomited up half a cigarette…..’ This cat had access (now curtailed) during daylight hours to a frontyard and sidewalk where (regrettably) passers-by sometimes leave cigarette butts, and apparently inadvertently drop whole cigarettes on occasion.
‘…..the well-known drug cisplatin…..is nonetheless notorious for its emetogenic effects…..’ This is graphically described in Siddhartha Mukherjee’s prize-winning cancer book The Emperor of All Maladies (Fourth Estate, 2011), in which it was noted that nursing staff in oncology units nicknamed cisplatin ‘cis-flatten’.
‘…..drugs such as vincristine and bleomycin are very low in inducing this highly unpleasant side-effect [emesis].’ For a review including the classification of cancer cytotoxic drugs by their emetogenic potential, see Hesketh 2008.
‘…..more recent evidence suggests cooperating regions of the medulla…..’ See Hornby 2001.
‘……TAS2R bitter receptors are not only expressed in taste buds, but at a number of distinct anatomical sites, including the gut and the brain.’ For a recent general perspective on non-perceptual roles of taste receptors, see Trivedi 2012. For a specific view of TAS2Rs in gut sites, see Rozengurt & Sternini 2007; for brain, see Singh et al. 2011.
‘…..a parsimonious evolutionary pathway.’ The notion of biological modularity is encompassed within an interesting paper of Weiss 2005.
‘…..emesis can be induced by sufficient concentrations of simple salts…..’ These include lithium chloride and copper sulfate; see Percie du Sert et al. 2012.
‘……measurement of emetogenic potential itself, and its variation between species.’ An extensive review of the literature on emetic induction with a variety of agents across a range of species was conducted and analyzed by Percie du Sert et al. 2012. Apart from measurement inconsistencies between species, animal assays for emesis can be distressing, so alternatives are being developed. See Robery et al. 2011 for work in this regard with the non-sentient social ameba Dictyostelium.
‘…..denatonium….’ This name comes from its use in rendering alcohol undrinkable, or ‘denatured’. It has widespread application as an aversant added to moderately toxic materials to discourage consumption, especially by children. As a quaternary substituted nitrogen compound, it is usually produced as benzoate or saccharide salts. See Hansen et al. 1993.
‘…..recent studies showing distinct brain regions activated by the respective types of taste stimuli…..’ See Chen et al. 2011.
‘…..phenylbiguanide …… an emetogen….’ See Miller et al. 1994.
‘……cholecystokinin variant (CCK-8) is a highly potent emetic in humans….’ Cholecystokinin occurs as a 33-mer peptide, but also as shorter truncated forms which retain activity, including the octamer CCK-8. For detail on the emetic properties of CCK-8 in comparison with other agents, see Percie du Sert et al. 2012.
‘……olfactory receptors might also be implicated in humans…..’ See Braun et al. 2007.
‘…..by rats and mice, whose physiology does not permit the emetic reflex….’ For an excellent (and fully referenced) account of this and many related areas (such as pica), see Anne Hanson’s rat behavior site, which also includes a list of known emetic behavior in a wide range of vertebrates.
‘ Conditioned food avoidance in rats…..’ This rat behavior has alternatively been referred to as ‘delayed learning’; also discussed in a previous post concerned with zoopharmacognosy.
‘….a behavior termed pica.’ The extent of pica in rats has been shown to correlate with the degree of emetogenicity of anticancer drugs in humans (Yamamoto et al. 2007). De Jonghe et al. 2009 have also provided evidence that consumption of kaolin (a type of clay) by rats can assist recovery from doses of the anticancer cytotoxic drug cisplatin. Pica has been documented also in emetic animals, and certainly humans are included in this regard. While human consumption of clays or related materials is mostly an abnormal behavior, in certain circumstances it has been proposed to have positive effects associated with correction of micronutrient deficiencies. The increased incidence of pica in pregnant women has been long noted, and this is possibly associated with benefits from protection against toxins. (See Young 2010). It is interesting to compare this with apparent zoopharmacognosy in pregnant lemurs through the consumption of tannin-rich plant materials (noted in a previous post).
‘…..denatonium, exquisitely potent as a bitterant as measured by human sensing, is markedly less so in rats….’ Some results seem to indicate that denatonium salts may be no more bitter to rats than is quinine. See Kaukeinen & Buckle 1992.
‘…..complete avoidance of absolutely all bitter substances would have negative nutritional consequences.’ See commentary of Calloway 2012.
‘……genetic differences in taste receptor repertoires……’ For more on genetic differences in human taste perception, see the previous post. Evidence for positive selection during human evolution of certain bitter taste receptor alleles has been demonstrated; see Soranzo et al. 2005; Li et al. 2011.
‘…..there is evidence that leaves of the tobacco plant have certain antiparasitic properties.’ See Iqbal et al. 2006.
Next Post: September.
This post is essentially the fourth part in continuation of the series ‘Subtle Environmental Poisons and Disease’, but in particular, it extends from the previous post dealing with xenorecognition, or the ability of organisms to recognize and contend with toxic chemicals ingested from the environment. Here we’ll focus on the range of xenobiotics which can be recognized by any of the different systems considered in the last post, which amounts to the biological recognition repertoire towards such chemicals. Is it complete, or can some chemical agents ‘fly under the radar’ and escape detection?
Failure of an organism’s defenses to recognize an incoming foreign compound would imply that its recognition range (or repertoire) was incomplete, such that its ability to ‘see’ certain molecules had one or more ‘holes’. While this is a logical proposition, it should be recalled that there are different levels of xenorecognition, including taste receptors, internal xenosensors, xenoprocessing enzymes, and xeno-exporters (considered in the previous post; see the relevant Figure). So, given that each level uses a different set of receptors, failure of recognition at one level has no necessary bearing on the potential recognition at other levels. The caveat ‘potential’ is used because in any linked functional chain, a breakdown at one stage will compromise later stages. (If an activation series A → B → C → D is absolutely dependent on the sequential input of each member, then obviously a ‘knock-out’ of A, B, or C will prevent the activation of D regardless of its intact state. D would then fail to be triggered unless alternative pathways for its activation existed). Thus, failure to activate a xenosensor may prevent effective upregulation of expression of the appropriate xenoprocessing enzymes (see the relevant Figure from the previous post), even if the latter are well-equipped to deal with the toxic threat. A hole in a repertoire at an ‘upstream’ defense level might therefore cause ineffective responses to a xenobiotic, even if the ‘downstream’ recognition repertoires are perfectly adequate.
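The strictly sequential dependency just described can be captured in a toy simulation. This is only a schematic sketch (the chain members and the knockout mechanics are invented for illustration), but it makes the point concrete: silencing any upstream member leaves D untriggered, however intact D itself may be.

```python
# Toy model of a strictly sequential activation chain A -> B -> C -> D:
# each member fires only if its immediate upstream partner fired and it
# has not been 'knocked out'. Purely illustrative of the argument above.

def run_chain(knocked_out=()):
    """Propagate activation along the chain; return the members that fired."""
    chain = ["A", "B", "C", "D"]
    fired = set()
    upstream_active = True  # the initial stimulus (e.g. an incoming xenobiotic)
    for member in chain:
        if upstream_active and member not in knocked_out:
            fired.add(member)
        else:
            upstream_active = False
    return fired

print(sorted(run_chain()))                   # ['A', 'B', 'C', 'D']
print(sorted(run_chain(knocked_out={"B"})))  # ['A'] - C and D are intact but never triggered
```

Adding an alternative activation route for D would change this outcome, which is exactly the caveat noted in the text.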
On the other hand, some lines of defense might seem decoupled from others. At the frontline of molecular sensing, bitter taste receptors essentially warn ‘don’t eat this!’. Yet if a dangerous substance is eaten anyway, through either misadventure or failure to receive a bitter signal, then surely the next lines of defense would be independent of the breakdown in the first strategy of avoidance. True enough, given the apparent independent nature of taste perception relative to other xenosensing mechanisms, but an interesting wrinkle on this has emerged from observations that the T2R taste receptors (which transmit bitter signals) are also expressed in specific gut cells or airway smooth muscle cells. Obviously this does not involve direct sensory transmission, since we don’t experience taste signals through our intestines, despite many people often having a ‘gut feeling’ about all sorts of important matters. So what do these gut taste receptors do? Although much more work is required, recent results have suggested that they may have a role in limiting the gut-mediated absorption of potentially toxic molecules (defined as ‘bitter’ through their interaction with these receptors). If this is correct, taste receptors may have more than one role in limiting the intake of potentially noxious compounds.
In the context of poisons, it is possible to think of recognition in an inverted sense, since obviously any toxic substance must itself ‘recognize’ at least one type of physiological target in order to exert any kind of toxic effect in the first place. This viewpoint strains the meaning of molecular recognition beyond its usual sense, since at face value it would have to be inclusive of simple chemical reactivity between (for example) a toxic aldehyde group and many different proteins and other biomolecules. Yet it might be useful in passing simply as a backdrop for posing a hypothetical situation where a toxic substance ‘recognizes’ certain target molecules of an organism, but the organism’s defenses are completely blind to it, at all levels of xenorecognition. And taking this further still, what of molecules that do no harm at all, while likewise escaping recognition? Such ‘invisibility’ will be looked at a little more below.
Holes for the Individual, Holes for the Species
A second important issue with respect to holes in any biological receptor repertoire concerns individual variation versus the general repertoire for the species as a whole. Let’s look at this question once again with the first level of defense against xenobiotics, the taste receptors.
For over 80 years, it has been known that genetic differences in humans determine the taste response to certain defined simple chemical substances. For example, a substantial fraction of humans cannot experience the intensely bitter taste of the compound phenylthiourea (also known as phenylthiocarbamide, or PTC) reported by the remainder. Over the last two decades, much has been learned about taste receptors, and the specific T2R receptor responsible for signaling PTC bitterness has been identified. Seven different alleles of this receptor have been identified, including the non-taster and major taster forms (these two being the only alleles occurring with substantial frequency outside sub-Saharan Africa). Interestingly, genetic evidence suggests that the non-taster allele has an ancient provenance, and this persistence has led to the proposal that it may have a selective benefit preserving it within the gene pool. This could have occurred if the non-taster receptor allele lost recognition for PTC but actually gained the ability to recognize and signal bitterness for some other (as yet unknown) naturally occurring compound. If both the taster and non-taster PTC alleles then provided fitness benefits under certain circumstances, both alleles would be preserved by ‘balancing’ natural selection.
Under such circumstances, the collective genotype of a species will be a mosaic of alternative alleles for sensing xenobiotics by taste. But in general, loss of sensory receptors can be a fitness gain if the sensory input no longer exists, or is no longer in any way beneficial, for the species. The classic example in this regard is the loss of sight (and eventually complete loss of the eyes) in cave animals which live out their entire life-cycles in darkness. An interesting case in point with respect to chemical sensing is the loss of functional sweet taste receptors in domestic cats, which as obligate carnivores evidently have no need at all to experience sweetness or be attracted to sweet substances. Recently, this observation has been extended to a range of other ‘complete’ carnivores. It is a well-understood evolutionary principle that unnecessary genetic function will tend to be lost, since individuals lacking such gene expression will gain a slight fitness advantage. This may well be at work in the evolution of ‘unsweet’ (though definitely not unsavory) carnivores, but it is possible that other factors which positively select for sweet taste loss also operate in these circumstances.
Yet where a single receptor has a degree of promiscuous ligand recognition, as with the bitter taste receptors, total ablation is likely always to incur a fitness loss. (In a changing environment, some dangerous compounds recognized by such a receptor may no longer be encountered, but other compounds within the receptor’s individual recognition range may still be present). But a functional mutation in a receptor (rather than complete inactivation) might merely alter its specificity range, and could involve both losses and gains, as noted for the PTC story.
So in principle any xenosensory receptor could, through inactivating mutation, give rise to a specific repertoire reduction in an individual. This will constitute a fitness loss, and will be eliminated from naturally breeding populations even if the reduction in fitness is quantitatively very small. Selection in favor of loss (as with sweet taste in carnivores) is unlikely to ever occur with xenosensory receptors in general (including bitter taste receptors) for the reason of recognition promiscuity, but selection maintaining variation in individual receptor repertoires (as with PTC perception) is probably present. It should not be surprising that here we exclude sweet taste reception from xenosensing, since after all, the main targets of sweet perception are simple sugars (in food sources) which are certainly not foreign to any living biosystems. Yet the sweet taste receptor can definitely be triggered by completely non-natural compounds (saccharin, aspartame, and many others) and some intensely sweet natural proteins. This might be framed as ‘xenorecognition’ of a sort, but that is not the primary issue. It is the neurological end-point, the sensory perception at the end of the initial taste receptor triggering, which distinguishes a useful taste-mediated xenoreceptor. Sweet substances (naturally, in primate diets, mainly sugars in fruits) trigger a pleasurable response (‘good – eat me!’), while intensely bitter substances produce an aversive reaction (‘bad – don’t eat!’). In fact, if a natural toxic substance elicited a sweet response, an animal might be stimulated to consume more of it, to its great detriment. And that of course would be completely contrary to everything that an effective xeno-response system should provide.
Clearly, natural selection would rapidly change sweet taste receptors which acted in this way towards compounds in an animal’s normal environment, but no such selective pressures exist for substances which are never likely to be naturally encountered. An example of such an ‘unnatural’ toxic but sweet substance is ethylene glycol, widely used as an antifreeze. Poisonings of dogs and young children have been attributed to its sweetness, although hard evidence for this seems to be lacking. It is indisputable, though, that ethylene glycol is very toxic (through its metabolic products) and elicits a sweet taste. At the very least, the perception of ethylene glycol sweetness would presumably not deter an animal with functioning sweet taste receptors from imbibing it, in the same way that a strongly bitter substance would.
While ‘holes’ in the xenobiotic recognition repertoire of a species as a whole could in principle occur at any level of xenosensing and processing (as noted above; see a Figure from the previous post), deficits in taste warning signals are relatively easy to define. So let’s consider an example of a general deficit of this kind towards an interesting group of highly toxic compounds.
Xeno-myopia to xeno-blindness?
Certain tropical marine fish can be a source of a potent group of toxic compounds which upon consumption cause a condition known as ciguatera. The toxic principle involved, ciguatoxin, is a complex polyether chemically related to a number of other known marine poisons, including brevetoxin, palytoxin, and maitotoxin. (The latter is of interest as the largest known natural product that is not a biopolymer, with a molecular weight of 3425 Daltons). Ciguatoxin itself exists as several chemical variants based on a common polyether skeleton, of molecular weights around 1000–1100 Daltons. Polyether toxins are accumulated in fish through the food chain, with the original source identified as certain species of the marine eukaryotic single-celled protists known as dinoflagellates. (Although the ultimate machinery for synthesis of these large and complex molecules may derive from symbiotic bacteria associated with specific dinoflagellate species).
Structure of a representative ciguatoxin, ciguatoxin-1. Letters A-M correspond to the nomenclature convention for each cyclic ether ring.
Unlike a great variety of plant-derived toxic alkaloids and other noxious molecular agents, ciguatoxin is tasteless, and thus fails to bind and activate any of the bitter taste receptors. But of course, failure to trigger the first line of defense has no bearing on what a molecule may do once ingested. The very high toxicity of ciguatoxin obviously demonstrates that it must very significantly interact with at least one physiological target. (In fact, it is neurotoxic, perturbing the activity of voltage-gated sodium and potassium channels which regulate nerve electrochemical transmission). While bypassing the frontline of taste, how is ciguatoxin ‘seen’ by the remainder of the xenosensory system? The metabolism of this compound (and related molecules) appears slow in experimental animals, with much ciguatoxin excreted in an unmodified state. Symptoms of ciguatera toxicity in humans can persist for months or even years following exposure, consistent with slow metabolic turn-over. On the other hand, evidence has been produced indicating that exposure of mice to ciguatoxin is associated with transcriptional activation of Phase I and II xenobiotic responses (phases of the latter responses were considered in the previous post).
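The kind of persistence described here is what a simple one-compartment, first-order elimination model predicts for a compound with a very long half-life. In the Python sketch below, both half-lives and the 30-day time point are hypothetical values chosen only to contrast a rapidly metabolized compound with a poorly metabolized one; they are not measured pharmacokinetic parameters for ciguatoxin.

```python
import math

# One-compartment, first-order elimination: C(t) = C0 * exp(-k * t),
# where k = ln(2) / half-life. Half-lives below are hypothetical.

def fraction_remaining(t_days, half_life_days):
    """Fraction of the original dose still present after t_days."""
    k = math.log(2) / half_life_days
    return math.exp(-k * t_days)

fast = fraction_remaining(30, 0.5)   # efficiently metabolized: 12-hour half-life
slow = fraction_remaining(30, 60.0)  # poorly metabolized: 60-day half-life

print(f"after 30 days: fast {fast:.2e}, slow {slow:.2f}")
```

A month after exposure, the efficiently processed compound is essentially gone, while the slowly processed one is still present at over two-thirds of the initial level, which is at least qualitatively consistent with symptoms that linger for months after a single intoxication.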
In combination, these data would suggest that while ciguatoxin (and likely other polyether marine toxins) can trigger xenobiotic sensors after its ingestion, its processing and removal from the body is not highly efficient. Certainly its lipid solubility may delay its removal, but that alone would not account for a very low level of metabolic processing. Given the focus of this post on xenorecognition repertoires, what is the limiting case of poor recognition of a toxic agent? In other words, if failure to taste ciguatoxin and its ensuing poor metabolism is ‘xeno-myopia’, is there any precedent for ‘xeno-blindness’, where a toxic agent creates havoc without any recognition or metabolic processing? Or would this be virtually a contradiction in terms? Given that xenorecognition operates by means of a specific set of receptors of limited number (albeit with considerable promiscuity) and a vast number of potential targets for a toxin exist in vivo, it might not seem an impossible prospect. Yet there seems to be no precedent for this. It is likely that certain compounds are indeed poor substrates for all metabolic processing enzymes (and thus slowly metabolized), but ‘poor’ is not at all the same as ‘invisible’. It may be the case that virtually all small molecules offer a weak binding site fit for the promiscuous pockets of at least some xenosensors and processing enzymes, allowing a slow level of metabolic turnover. Alternatively, ‘non-specific’ attack by reactive oxygen species might be a factor, as noted again below.
In a xenobiotic context, the biological rationale for promiscuous recognition in the first place is to ensure that a limited number of receptors can cater for recognition of a much larger range of potential targets. But as with any biological issue, this question must also be considered from the perspective of evolutionary selective pressures. Evolutionarily speaking, the human species would have had little if any exposure to ciguatoxin until relatively recent times, and even now, its impact is restricted to specific geographical areas. A maritime fish-eating species in tropical areas which was regularly threatened by ciguatera poisoning would be under a strong selective pressure to evolve a better xenorecognition system towards polyether toxins, including primary aversive taste sensitivity. Alternatively, evolution of means for very efficiently detoxifying or internally sequestering polyether toxins would allow otherwise contaminated marine foods to still serve as useful nutrient sources. (It is possible that some tropical fish have the latter kind of protection, since they can accumulate high levels of ciguatoxin without apparent ill-effects). Sometimes a small change in the amino acid sequence of a target molecule for a poison can make a very large difference in an agent’s toxicity. For example, consider the action of the insecticide DDT, which (in common with many of the polyether marine toxins) targets the neural voltage-gated sodium channel. It appears that only three key amino acid residue differences in the human vs. insect sodium channel determine the differential toxicity of DDT to insects. Selective pressures from environmental toxins could thus drive sequence changes in targets such as this voltage-gated channel, such that function is preserved but susceptibility to the toxin is diminished.
Xenosensing vs. adaptive immunity
While thinking about evolutionary selective pressures, it’s interesting to compare recognition of xenobiotics with the adaptive immune system. The latter, of course, exists to deal with a gamut of pathogens which otherwise would take over a host and replicate freely at the host’s expense. Internal surveillance against transformed cells (‘altered self’) to prevent tumor formation is another role for this advanced recognition system.
It is easy to conceive of ‘adaptive xenosensing’, where a novel (and poorly recognized) toxic environmental compound induces selective processes from populations of variant receptors on xenosensory cells, such that variants with greater affinity are selected and amplified. The power of this Darwinian process in action has been shown by the successful artificial generation of antibodies to ciguatoxin itself. This would not occur under natural circumstances, since it requires artificial conjugation of fragments of ciguatoxin to large protein carrier molecules, such that the toxin fragments act as immunological haptens. Nevertheless, this demonstrates that the adaptive immune system can indeed select for antibodies with the correct binding specificity against a toxic polyether molecule.
Why then does this not occur with xenosensing, to overcome poor initial responses to novel xenobiotics? (Here we return to this question as initially noted in the previous post). Once again, we must look to evolutionary explanations. Evidently the existing xenorecognition systems of vertebrates are selectively ‘good enough’ despite theoretical room for improvement, where the latter would require extensive investments in new developmental pathways with their consequent energetic demands. Above all, even the most poorly-metabolized compounds do not replicate, and (provided they are present in sublethal amounts) are gradually removed from organisms. Pathogenic and invasive organisms, on the other hand, will indeed replicate, and present an acute problem demanding adaptive solutions. And this is what evolution has bequeathed us: a xenorecognition system which is static in the lifetime of an individual, but variable through selective pressures over evolutionary time; and an immune system which is dynamically adaptive in time-frames much shorter than an individual life-span.
Bioorthogonality and Xenobiotics
We have considered ‘xeno-blindness’ as a hypothetical situation where a toxic compound elicited no response from an organism which had ingested it. (Such a molecule would ‘recognize’ one or more target molecules anywhere within the bounds of the host’s biosystem (and thereby manifest toxicity), but the foreign compound would fail to be recognized by any of the host’s xenodefenses, at any level). What if non-recognition is taken a step further still, such that the xenobiotic is neither toxic nor recognized? In such circumstances, we would be reminded of the notion of orthogonality, as raised in a previous post with respect to ‘weird life’. Our hypothetical compound which is completely ‘invisible’ (neither toxic nor xeno-recognized) would thus be considered bioorthogonal. Toxicity, of course, is the reason many compounds come to the attention of science in the first place. If the polyether metabolites of dinoflagellates were completely non-toxic, they would likely have escaped detection, given their low absolute amounts present in most marine samples. (Of course, they would still not be chemically ‘invisible’, and would eventually be picked up by modern sensitive metabolomic profiling – but this would be much delayed relative to the ‘flagging’ of their presence through their toxic actions).
A first thing to note in this regard is that bioorthogonality can be a relative concept. Consider that a compound could be ‘invisible’ in a specific cell type in culture, yet actively metabolized by cytochrome P450 enzymes expressed in liver cells in the whole organism from which the cultured cells were derived. In such circumstances, bioorthogonality might be assigned in the first case, but certainly not the latter. Yet even if bioorthogonality (or something approaching it) exists for an entire mammalian organism, this need not apply to the biosphere as a whole. Bacteria, after all, are the consummate masters of biochemical transformations, and can modify an astonishing range of compounds. Included among these are natural polyether toxins themselves, and a great many non-natural artificial compounds. A good case study of the latter phenomenon is the targeting of paraoxon (a toxic metabolite of the organophosphorus insecticide parathion) by the bacterial enzyme phosphotriesterase. This activity is believed to have evolved only within the last few decades, after paraoxon became present in the environment, since no natural substrate for this enzyme is known.
It is thus not difficult to see that bioorthogonality can exist in discrete compartments (as in the case of a single cell type in culture noted above), but it is much more problematic to accept that any novel molecule would evade recognition within the entire biosphere. Such a hypothetical molecule could even be seen as a kind of orthogonal ‘dark matter’, but its existence would be very dubious for similar reasons to the possible existence of truly ‘orthogonal life’ on this planet intersecting with conventional life (as noted in a previous post). Certainly new artificial molecules released into the environment (such as DDT and other organochlorine compounds) persist for long periods, but again this is slow processing rather than total non-recognition, given that organisms capable of metabolizing such products are not evenly environmentally distributed. And, as exemplified by the above paraoxon example, bacteria can evolve efficient enzymatic recognition and processing extremely quickly, so any period of supposed ‘orthogonality’ would likely be short in any case.
It might be thought that any molecular entity even approaching the notion of bioorthogonality should exhibit chemical stability and low reactivity. At one level there would seem to be some value in such a proposition, given the environmental and chemical stability of compounds such as fluorinated hydrocarbons (especially polymers thereof). But at another level, this cannot be correct. Certain heavily fluorinated compounds (including the simple molecule carbon tetrafluoride, CF4, but more commonly derivatives of methyl ethyl ether) have the property of acting as general anesthetics. And even the ultimate in non-reactivity, the inert gases, can induce such anesthesia. The inert (or ‘noble’) gas xenon has often been cited as a near-ideal anesthetic, with only its considerable expense limiting its much more widespread use. (It is a little ironic that the name ‘xenon’ has the same etymological root meaning ‘stranger’ as seen in all the ‘xeno-’ words in this post). Xenon can in fact form a limited number of chemical compounds with highly reactive partners under specific circumstances, but there is no question of it forming any covalent bonds under physiological conditions.
Although there are vast numbers of artificial and naturally-derived drugs which bind non-covalently to their specific targets (and thereby act as inhibitors or other functional modulators), all of these are subject to some level of recognition by other proteins within the xenosensing system, followed by subsequent xenoprocessing involving covalent modification. This, of course, is the underlying basis of all drug metabolic studies. As we have seen, some xenobiotics are metabolized at a very slow rate. In this post, complex polyethers are the key exemplars, but dioxin (TCDD) is another important case in point, as discussed in the previous post. In neither case, however, can slowness of metabolism be in any way equated with complete invisibility to xenoprocessing mechanisms. Thus, while the mode of action of drugs may very often be via non-covalent interactions, drug processing (the xenorecognition system) involves at least a low level of covalent modification. As noted above, it could be argued in principle that very slow metabolic attack on highly resistant xenobiotics might proceed through the action of reactive oxygen species, whether deriving from cytochrome P450 activity (or other processing enzymes) or more non-specifically. If the latter, the authenticity of the ‘xenorecognition’ might be called into question, if bona fide ligand-receptor interactions (even at a high level of promiscuity) were not involved. Even if this should be the case, the reactive oxygen species nevertheless derive from host metabolism, and so even very slow attack on xenobiotics from this source still would result in a failure of true bioorthogonality.
But normal xenoprocessing (or any non-specific oxidation) cannot be relevant in any way to xenon, since xenon will never undergo any covalent reactions in vivo. And yet xenon surely is far from bioorthogonal, given its dramatic ability to modulate conscious experience in vertebrate organisms. These observations indicate that bioorthogonality on the part of any xenobiotic factors cannot be described simply by a complete lack of covalent reactivity at all biosystem levels. (Note we cannot refer to ‘compounds’ or ‘molecules’ when including monatomic inert gases such as xenon). So while hypothetical bioorthogonality would necessarily involve a lack of reactivity, it would have to be defined as the absence of functional reactivity of any kind, whether covalent or non-covalent, at any physiological level.
There’s an important area relevant to bioorthogonality already alluded to in a previous post, which concerns the artificial development of chemical reactants and reaction processes which themselves are orthogonal to the biological systems in which they take place. But to do justice to it, that will have to wait until a later post.
So, to conclude with one of the subthemes used here:
One should note that ‘xeno’ means stranger
And possibly, terrible danger
A harsh bitter taste
Is no form of waste
It serves as a guardian ranger
References & Details
(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).
‘…..the observation that certain taste receptors….. are also expressed in specific gut cells…’ See a review by Rozengurt & Sternini 2007. ‘……recent results have suggested that they may have a role in limiting the gut-mediated absorption of potentially toxic molecules….’ See Jeon et al. 2011. / ‘…..or airway smooth muscle cells….’ See Deshpande et al. 2010.
‘ For over 80 years, it has been known that genetic differences in humans determine the taste response….’ The phenomenon of ‘taste-blindness’ to phenylthiourea (phenylthiocarbamide) was first reported in 1931; see a review by Drayna 2005.
‘…..the specific T2R receptor responsible for signaling PTC bitterness has been identified…’ For details of this receptor (TAS2R38), see Bufe et al. 2005.
‘……this persistence [of the non-taster PTC allele] has led to the proposal that it may have a selective benefit preserving it within the gene pool. ‘ See Kim & Drayna 2005.
‘……the loss of functional sweet taste receptors in domestic cats……’ / ‘ Recently, this observation has been extended to a range of other ‘complete’ carnivores. It is a well-understood evolutionary principle that unnecessary genetic function will tend to be lost..….it is possible that other factors which positively select for sweet taste loss also operate in these circumstances.’ See Jiang et al. 2012 for details on carnivore loss of sweet taste. In general, an often-noted example of abrogation of unnecessary gene function is loss of the ability to synthesize vitamin C (ascorbate) by primates, owing to their fruit diets containing plentiful supplies of the vitamin.
‘ the sweet taste receptor can definitely be triggered by …….some intensely sweet natural proteins.’ Proteins triggering the sweet taste receptor bind to a different site from that used by low-molecular-weight saccharide sweet substances. See De Simone et al. 2006.
‘ Poisonings of dogs and young children have been attributed to its [ethylene glycol’s] sweetness, although hard evidence for this seems to be lacking…’ See studies in dogs by Marshall & Doty 1990; Doty et al. 2006. Whether or not at least some dogs are prompted to consume ethylene glycol through its taste, non-sweet tasting cats and other obligate carnivores would presumably be completely resistant to this effect. (Note that dogs, like bears, are not in fact ‘complete’ carnivores, and can subsist on other foods).
‘…..the original source identified as the marine eukaryotic single-celled protists known as dinoflagellates….’ For some basic background on dinoflagellates, and especially their unusual genomics, see Lin 2011; Wisecaver & Hackett 2011.
‘….ciguatoxin is tasteless….’ See Park 1994; Lehane 2000. ‘Tastelessness’ here refers to the highest concentrations of polyether marine toxins found in contaminated fish, which are clearly sufficient to intoxicate a human or other mammal. Thus, even if artificially massive concentrations of ciguatoxin (far in excess of that encountered in contaminated natural sources) stimulated a taste receptor signal, such a response would clearly be far too insensitive to be useful as a primary anti-toxic avoidance screen. So tastelessness here is a functional definition, even if not necessarily absolute.
Another intriguing observation in this respect is that a commonly-reported symptom of ciguatera intoxication is distortion of taste perception (dysgeusia), such as experiencing a metallic taste in the mouth. Recent evidence suggests that this arises from ciguatoxin (and related polyethers) interfering with voltage-gated ion channels in taste receptor cells. These channels are associated with neurotransduction of taste receptor signals, but must be distinguished from the taste receptors themselves (which are members of the very large G Protein-Coupled Receptor family). See Ghiaroni et al. 2005; Ghiaroni et al. 2006. It thus seems ironic that polyether marine toxins fail to effectively activate taste receptors in the first place, yet perturb their function once intoxication has occurred.
‘…..ciguatera toxicity in humans can persist ……consistent with slow metabolic turn-over…’ See Lehane 2000; Chan & Kwok 2001; Bottein et al. 2011. Note that (without further information) this is by no means proof of actual persistence of the original toxic molecule, given the formal possibility of ‘hit-and-run’ ongoing pathological effects, as noted for the neurotoxic chemical MPTP in a previous post.
‘….exposure of mice to ciguatoxin is associated with transcriptional activation of Phase I and II xenobiotic responses….’ See Morey et al. 2008.
‘ A maritime fish-eating species in tropical areas which was regularly threatened by ciguatera poisoning would be under a strong selective pressure to evolve a better xenorecognition system….’ This specifically refers to land-dwelling or semi-aquatic animals rather than those which are fully marine. ‘Red tides’ of dinoflagellate blooms are often associated with massive fish kills, but in such cases it appears to be from release of toxins directly into local marine environments. Where this applies, improved xenorecognition could not promote avoidance. Even if protective mechanisms have evolved in an animal towards a toxin, massive transient exposures may still have lethal consequences.
‘….. possible that some tropical fish have the latter kind of protection [detoxifying or internally sequestering polyether toxins]….’ In this regard, it is interesting to note that a natural inhibitor of the toxic effects of at least one polyether marine product (brevetoxin) has been isolated, albeit in this case from dinoflagellates themselves. (Production of the inhibitor as well as the toxin in varying proportions by dinoflagellates may contribute to the variable magnitudes of fish kills during ‘red tides’). See Bourdelais et al. 2004.
‘….only three key amino acid residue differences in the human vs. insect sodium channel determine the differential toxicity of DDT….’ See O’Reilly et al. 2006.
‘ Why then does this [development of adaptive recognition systems] not occur with xenosensing, to overcome poor initial responses to novel xenobiotics….’ A similar scenario was raised in Searching for Molecular Solutions (Ch. 2, Molecular Sensing / Multirecognition) with respect to chemical sensing of odorants.
‘ Pathogenic and invasive organisms, on the other hand, will indeed replicate, and present an acute problem demanding adaptive solutions. ‘ A seeming paradox in this regard is the lack of adaptive immune systems in invertebrates, which are certainly just as prone to microbial assaults. One answer may lie in their possession of highly diverse innate immune receptors, and this is a topic for a later post.
‘ Bacteria, after all, are the consummate masters of biochemical transformations …..Included among these are polyether toxins……’ See Shetty et al. 2010.
‘…..the inert gases, can induce such anesthesia…..’ Xenon, krypton, and argon have anesthetic properties, but xenon is the most useful in having such effects under normal conditions of pressure. See Kennedy et al. 1992. Although the mechanism of inert gas anesthesia is uncertain (as are mechanisms of anesthesia in general), xenon has long been known to be capable of binding to hydrophobic pockets in proteins (See Prangé et al. 1998), which might be associated in some way with its anesthetic activity.
‘ Xenon can in fact form a limited number of chemical compounds with highly reactive partners under specific circumstances….’ The first xenon compound (xenon hexafluoroplatinate; also the first compound of any of the noble gases) was prepared by Neil Bartlett in 1962. For a review of this and progress in inert gas chemistry in general, See R. B. Gerber’s very useful article from the Israeli Chemical Society site.
Next post: July.
This post is the third in a series (Subtle Environmental Poisons and Disease) dealing with environmental toxic influences, particularly those with long-term ‘subtle’ action. The major subtheme here is the role of individual variation in determining the outcome of a toxic challenge, with particular emphasis on how (in some cases) an organism’s anti-toxic protective mechanism may actually be a source of problems. An implicit requirement underlying both of these topics is the existence of specialized systems for recognizing potentially dangerous non-self molecules from the environment. These themes accordingly center around xenorecognition, or the ability to recognize foreign chemical intrusion at the molecular level. Framing the matter in this manner may bring to mind the immune system, and indeed an analogy can be made between responses to chemical intrusion and innate immune systems tuned by evolution for signaling responses to the presence of dangerous pathogenic organisms. Although such parallels should not be overstated, both systems serve to maintain homeostasis for complex multi-cellular organisms.
Contending with a Sea of Potential Poisons
The exquisite complexity of living biosystems dictates their sensitivity to a variety of negative perturbations, which can range across a spectrum of extraneous physical, chemical and biological influences. Parasitic replicating systems are likely to have been a serious problem even at the earliest stages of molecular evolution, and defenses against them likewise must have evolved at equally early times. It is precisely the ability of invasive biosystems to replicate at a host organism’s expense which renders such parasites a serious threat. When replication per se is combined with the frequent ability of biological invaders to rapidly evolve (and alter the sets of nucleic acids and proteins by which they may be recognized), a potent selective force is generated for the evolutionary derivation of increasingly complex counter-attacks. These we refer to as immune systems.
Yet there are a great many potential environmental threats which do not directly replicate, and these may originate from either biological or non-living sources. For the latter, we could think of toxic levels of metal ions or other soluble inorganic natural chemicals (such as dangerous oxygen radicals), or natural sources of dangerous gases (such as from volcanic effluxes). Across the field of biology in general, there is a huge range of natural poisons enzymatically synthesized from bacterial, fungal, plant or animal sources. As enzyme systems evolve, so too their range of natural products will change. Given these factors, the sheer numbers of potentially toxic biocompounds will vary greatly between different environments, and the specific nature of such molecules in any setting will inevitably alter over time.
What is needed in order to deal with this? One might consider a system where each potential threat was countered and nullified by a specific recognition molecule, but this proposal soon looks quite impractical if a very large number of potential molecular agents are possible. Also, as just noted, any such agents are not fixed immutably – and even a small chemical alteration might stymie effective recognition by a specifically-directed receptor. Immune systems facing challenges from infectious biological replicators have used a variety of strategies, culminating with adaptive immunity where complex mechanisms are used to generate receptors which are indeed ‘tailored’ to a novel threat. This level of sophistication has never evolved for dealing with non-replicating chemical poisons, an issue to be revisited in a subsequent post.
How then is defense against noxious chemicals obtained? While there is no comparable specificity to that seen with antibodies generated by adaptive immune systems, multiple lines of defense have evolved to counter specific poisonous threats. Dangerous levels of certain metal ions, for example, can be countered through the actions of proteins called metallothioneins, which bind and sequester such metals and thereby ameliorate their toxic effects. Strongly oxidizing chemical groups (whether generated through normal metabolic activities or acquired from the environment) are routinely mopped up by a variety of endogenous antioxidants, among which the versatile metallothioneins are included. But of particular interest in this context is the huge diversity of foreign organic small molecules which might potentially impact upon any organism’s normal biological operation – how can these be effectively neutralized?
Before looking at this question, it would be useful to consider a little semantics revolving around what will become a key word here: ‘xenobiotic’. This word, literally ‘stranger to life’, is often used in two distinct, though overlapping, senses. Firstly, it can refer to any molecule which is foreign to the physiological functioning of an organism in question. In other words, in this sense a ‘xenobiotic’ denotes any molecular entity which is neither synthesized by the organism itself, nor normally used by it as a food, nutrient cofactor, or for any other function. As such, it covers the whole gamut of natural products deriving from the collective biosphere which are foreign to the normal make-up or functioning of a specific organism. Clearly, though, this definition would also encompass all artificial molecular entities, all molecules whose origins derive entirely from human ingenuity. And here the second sense of ‘xenobiotic’ arises, since in many cases this word is used to refer (more or less) exclusively to artificial compounds, especially where they have become environmental contaminants. Although this framing of the word is more restrictive, it would actually seem closer to its literal meaning as foreign to life in general, thus indicating something truly new under the sun. Nevertheless, for the present purposes it will be used in the first sense, which embraces natural ‘foreignness’, as well as artificial sources of molecular ‘non-self’. After all, no-one suggests that defenses against foreign chemical agents evolved to deal with the possibility that non-natural compounds might one day emerge in the environment!
Levels of xeno-defense
Since it is impossible for an organism to avoid taking in physical materials from its environment, the potential for exposure to noxious chemicals will always exist. But equally obviously, the risks from ingesting nutrients are not evenly distributed across the environment as a whole, and avoiding foci of possible danger is of clear value. This is simply in accord with the old dictum, ‘prevention is better than cure’, although of course blindly applied by favorable evolutionary selective pressures. For mobile animals, chemosensory perception has an important role in screening out noxious nutrient sources. Potentially dangerous decaying foods can warn via their odors, and many food sources (especially plant-derived) bearing toxic secondary metabolites signal the potential threat through strongly bitter and aversive tastes. Since most toxic plant alkaloids are not volatile, taste aversion is likely to be the most important means of primary screening of potentially noxious environmental compounds.
This then returns us to the above general question regarding how an organism can handle the multiplicity and diversity of potential molecular threats, by asking how the front-line taste-based screening can work. It is now known that the perception of bitterness is mediated by the ‘Type 2’ taste receptors (TAS2R), encoded by about 30 distinct genes in the human genome. Obviously this is massively insufficient to cover the scope of potentially noxious compounds, if each receptor were specific to a single target structure. While much more information is needed, it appears that while different TAS2R receptors respond to different bitter tastants, the receptors as a whole are not dedicated to unique structures. A key descriptive word in this context which will apply at other stages of xenorecognition is ‘promiscuity’, or relaxed discrimination between different molecular structures serving as recognition targets. Presumably, the promiscuity shown by the TAS2R receptors is sufficient for perception of a wide enough range of structures to be biologically useful as a front-line gating of potential poisons. (Each receptor is likely to have its own pattern of structural recognition, such that collectively the receptors cover a sufficiently adequate area of chemical space).
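The power of promiscuity as a coverage strategy can be made concrete with a back-of-the-envelope calculation. This is purely a toy model with hypothetical numbers (the per-receptor recognition fractions below are illustrative assumptions, not measured receptor data): if each of ~30 receptors independently recognized some fraction of chemical space, the chance that a given compound escapes all of them falls off geometrically.

```python
# Toy model: how ~30 promiscuous receptors can jointly cover much of
# 'chemical space' even though each one is individually non-specific.
# If each receptor recognizes a random fraction p of compounds,
# independently of the others, a compound escapes all n receptors
# with probability (1 - p) ** n.

def fraction_covered(n_receptors: int, p_per_receptor: float) -> float:
    """Fraction of compounds recognized by at least one receptor."""
    return 1.0 - (1.0 - p_per_receptor) ** n_receptors

# A hypothetical highly specific repertoire: each receptor hits 0.1% of space.
specific = fraction_covered(30, 0.001)    # ~3% total coverage

# A hypothetical promiscuous repertoire: each receptor hits 10% of space.
promiscuous = fraction_covered(30, 0.10)  # ~96% total coverage

print(f"specific: {specific:.3f}, promiscuous: {promiscuous:.3f}")
```

The toy numbers are arbitrary, but the qualitative point stands: with only ~30 germline genes, broad per-receptor recognition is the only way to usefully gate a vast and shifting space of potential poisons.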
Clearly, though, other lines of defense against noxious molecules will be needed. While obviously biologically useful, gating against primary ingestion of poisons could not provide any guarantees. Toxic products might fail to register bitterness, be so potent as to be still dangerous when below the threshold of taste awareness, or be masked by other tastants present in the entire ingested material. Or poisons might be inadvertently taken in by non-oral routes, thereby circumventing anything that TAS2R signaling could achieve.
The conventional view of the processing of ingested drugs (meaning essentially the same as natural or artificial xenobiotics in this context) is divided into three metabolic phases, involving various Xenobiotic Metabolizing Enzymes (XMEs) and other factors. In Phase I metabolism, xenobiotics are acted on by enzymes (particularly those of the cytochrome P450 family) which incorporate or expose chemical functional groups, by redox or hydrolytic reactions. In Phase II, the initial processing facilitates the transfer of natural biological groups onto the xenobiotic to form various conjugates. Phase III reactions (those occurring post-conjugate formation) can involve further processing by Phase I enzymes, and often are taken to include the export of modified xenobiotics across cell membranes by various efflux systems.
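For readers who think in code, the three phases can be caricatured as a simple processing pipeline. This is a schematic sketch only; the tags below are illustrative labels standing in for chemical states, not real enzyme chemistry:

```python
# Schematic sketch of Phase I-III xenobiotic handling as a pipeline.
# The string tags are illustrative placeholders, not real chemistry.

def phase1(tags: frozenset) -> frozenset:
    """Phase I: P450-type enzymes introduce or expose a functional group
    by redox or hydrolytic reactions."""
    return tags | {"hydroxylated"}

def phase2(tags: frozenset) -> frozenset:
    """Phase II: transferases conjugate a natural group (e.g. glutathione)
    onto the compound -- facilitated by the Phase I modification."""
    if "hydroxylated" in tags:
        return tags | {"glutathione-conjugate"}
    return tags

def phase3(tags: frozenset) -> bool:
    """Phase III: efflux transporters export conjugates across the membrane."""
    return "glutathione-conjugate" in tags

xenobiotic = frozenset({"hydrophobic", "xenobiotic"})
exported = phase3(phase2(phase1(xenobiotic)))
print("exported:", exported)
```

Note that skipping Phase I in this sketch blocks conjugation, and hence export — mirroring the point that the initial functionalization is what enables the downstream steps.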
Enzymes modifying ingested xenobiotics must clearly be capable of recognizing their molecular structures, although (as seen with the TAS2Rs above) not necessarily with high specificity. In relatively recent times, it has become apparent that before the onset of the Phase I metabolic processing, the primary recognition event involves key proteins generally termed ‘xenosensors’. Many of these had been previously discovered and defined as part of a nuclear receptor superfamily, but initially termed ‘orphans’ owing to their uncharacterized ligand-binding functions. Some such proteins, however, were later found to bind xenobiotic compounds, an interaction which in turn activates these nuclear receptors as transcription factors regulating the expression of key downstream Phase I-III proteins. (This new knowledge accordingly released these nuclear receptors from their ‘orphan status’). Among these xenosensors, the pregnane X receptor (PXR) and constitutive androstane receptor (CAR) have received much attention, but various other xenosensing nuclear receptors exist.
Another important xenosensor is the aryl hydrocarbon receptor (ArHR), which is initially located cytoplasmically, in distinction to the above nuclear receptors; upon binding one of its target ligands, the ArHR is translocated to the nucleus for regulation of its specific transcriptional targets. Figure A below depicts both primary xenosensing and the above-noted three Phases of xeno-processing:
Figure A. Depiction of cellular recognition for a hydrophobic xenobiotic (able to directly traverse cell membranes). Primary xenosensing, and the three Phases of metabolic processing are depicted, culminating in the export of modified compound. This simplified depiction does not attempt to show subcellular locations of the various metabolic components. The xenotransporters can act on conjugates between modified xenobiotics and ubiquitous factors such as the peptide glutathione, but in some cases xenobiotics may be directly exported. (These alternatives are represented by the xenobiotic in red).
Figure B. Depiction of xenobiotic recognition for protective purposes as a process where the front-line is held by odorant and (particularly) taste receptors.
The manner in which xenobiotics are metabolized is crucial to the outcome of the exposure of an animal to the alien molecule(s). All of the players in xenobiotic responses and handling can vary genetically, and this can be a major influence on outcomes for both the short and long-term.
Genetics and Poisons
In passing, we can note that genetic variation in response to chemical challenges is not limited to organic compounds. In the first post of this ‘subtle poison’ series, the deleterious effects of both heavy metals and mineral fibers were noted. In both of these cases, genetic influences on host responses have been recorded, although more data are needed to fully characterize the relevant genes involved in each area.
The role of individual variation in xenobiotic-metabolizing enzymes, and in turn variation in the way such molecules are processed between different individuals, has become of great interest in recent times. For the pharmaceutical industry and medicine, clearly an ability to accurately define how a drug will behave in a specific individual would be immensely valuable, and much useful information has been gained in specific cases. In particular, studying differences in cytochrome P450 family allelic enzyme activity levels has been a profitable undertaking, with clinical applicability.
But for the present purposes, let’s look at a particular aspect of the innate genetically encoded anti-xenobiotic responses, where the response itself is responsible (wholly or in part) for the resulting toxic effects.
An Autoimmune Analogy
Earlier in this post, the response against xenobiotics was contrasted with immune systems evolutionarily selected for countering infectious replicators. A fundamental difference between the vertebrate adaptive immune system and responses to xenobiotic threats is the restriction of the latter to sets of germline genes. In other words, while adaptive immune systems can generate and select novel receptors for countering previously unanticipated pathogens, the xenobiotic ‘immune system’ is expressed from innate sets of genomic coding sequences. In this respect, responses against ingested xenobiotics have more in common with the innate immune systems (of either vertebrates or invertebrates), where gene products recognizing specific microbial ‘danger signals’ are encoded in germline genomes.
The adaptive immune system’s greatest strength, the generation of novel receptors to meet novel threats, is also a potential source of harm through the unwanted generation of self-reactive immune specificities. Even though evolution has developed extremely sophisticated ways of avoiding this, adaptive immune system autoimmunity presents an ongoing clinical burden. It might be thought that any innate defense system would bypass this problem, since any innately encoded proteins or nucleic acids recognizing self should be strongly selected against through evolution. Yet it is now known that certain aspects of innate immunity can indeed help trigger autoimmunity under specific circumstances.
Responses driven by xenobiotic sensors and processors can also directly mediate deleterious results, in contradistinction to their ‘proper’ physiological roles. Although there is no direct parallel with innate immunity to be made, certainly one can view such inadvertently self-destructive responses as ‘autoimmune’ in a broad analogous sense, if one likewise considers xenobiotic processing as a special kind of innate (and usually protective) immunity in its own right.
Self-activation of xenobiotic deleterious effects
There is more than one pathway by which innate mechanisms can produce deleterious reactions to xenobiotic challenge. There is much precedent for the toxicity of a primary xenobiotic not being manifested until it is modified in vivo by Phase I metabolic enzymes. As a case in point, note that a previous post in this series looked at the generation of a neurological condition recapitulating Parkinson’s disease by the compound MPTP. Here the initial molecule was not the direct villain, but rather an MPTP derivative (MPP+) produced by the action of monoamine oxidase (MAO) enzymes. (A striking confirmation of this in animal studies is the blocking by MAO inhibitors of the neuronal destruction otherwise mediated through MPTP administration). Also, benzo[a]pyrene, a previously noted environmental xenobiotic found in soot and coal tar, is modified by Cytochrome P450 enzymes to an active epoxide derivative, which directly forms DNA adducts ultimately contributing to its carcinogenicity. In these circumstances the Phase I enzymes therefore actually aid and abet the carcinogenic process.
Detoxification may require a sequence of enzymatic modifications upon an initial xenobiotic exposure. During this process, an elevated toxicity of intermediate derivatives may be ‘acceptable’ if their presence is transient and the overall chain of modifications leads to complete elimination of the initial toxic problem. Genetic variations in the activities of key enzymes which retard the removal of highly toxic intermediates could clearly result in significant problems. A classic exemplar of these processing factors is the metabolism of ethyl alcohol (ethanol). This widely popular compound is initially converted into acetaldehyde by alcohol dehydrogenase (and also the Cytochrome P450 member CYP2E1), then into acetate by aldehyde dehydrogenase, and ultimately into carbon dioxide and water. These aspects of ethanol metabolism are common to all humans, which means that anyone imbibing alcoholic beverages is exposed not only to ethanol itself, but also to the same metabolic products. The most potentially dangerous of these is acetaldehyde, a known carcinogen. But since the processing of acetaldehyde itself yields quite benign acetate, the presence of ethanol-derived acetaldehyde is transitory. Just how transitory, however, may be a crucial issue.
It would seem obvious enough that a variant of aldehyde dehydrogenase (ALDH2) with reduced activity would allow the build-up of acetaldehyde after ethanol intake, and this is indeed the case for a significant fraction of humanity (mainly in East Asia) bearing an allelic variant of this enzyme (ALDH2*2) with very low activity. Blocking removal of acetaldehyde renders the effects of alcohol unpleasant, a feature which can be produced in anyone by means of drugs inhibiting the ALDH2 enzyme. (This has been the basis of one type of treatment for alcoholism). But increased levels of acetaldehyde can also result if the catalytic rate of alcohol dehydrogenase (ADH) itself is higher than the usual baseline, and this is seen with the ADH allele 1C*2. In such circumstances, the elevated rate of acetaldehyde production (relative to its enzymatic removal) increases its transient concentration in comparison to that seen with normal ADH.
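The interplay of the two enzymes can be illustrated with a minimal two-step kinetic sketch. The rate constants below are arbitrary illustrative values, not physiological measurements; the point is the qualitative behavior: lowering the ‘ALDH’ rate (an ALDH2*2-like variant) or raising the ‘ADH’ rate (an ADH 1C*2-like variant) both elevate the transient acetaldehyde peak.

```python
# Minimal kinetic sketch (illustrative rate constants, not real data):
#   ethanol --k_adh--> acetaldehyde --k_aldh--> acetate
# Simple Euler integration of two first-order steps; we track the peak
# transient acetaldehyde level.

def peak_acetaldehyde(k_adh: float, k_aldh: float,
                      ethanol0: float = 1.0,
                      dt: float = 0.001, t_end: float = 50.0) -> float:
    """Peak transient acetaldehyde concentration (arbitrary units)."""
    etoh, ach, peak = ethanol0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        d_etoh = -k_adh * etoh                 # ethanol consumed by ADH
        d_ach = k_adh * etoh - k_aldh * ach    # acetaldehyde made, then removed
        etoh += d_etoh * dt
        ach += d_ach * dt
        peak = max(peak, ach)
    return peak

baseline  = peak_acetaldehyde(k_adh=0.5, k_aldh=2.0)
slow_aldh = peak_acetaldehyde(k_adh=0.5, k_aldh=0.2)  # ALDH2*2-like variant
fast_adh  = peak_acetaldehyde(k_adh=1.5, k_aldh=2.0)  # ADH 1C*2-like variant

print(f"baseline={baseline:.3f}  slow ALDH={slow_aldh:.3f}  fast ADH={fast_adh:.3f}")
```

Running this shows both hypothetical variants overshooting the baseline peak — the same directional effect described above for the real alleles, by two different routes.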
By whatever means increased levels of acetaldehyde may be produced, the same trend towards increased carcinogenicity results, and evidence for the role of acetaldehyde in ethanol-induced cancers is well-characterized. Under natural circumstances, alcohol may be ingested in relatively small amounts sporadically (think of fermented fruits), but high-level or prolonged exposure in humans is almost always through voluntary actions. So alcohol could be viewed as having ‘autotoxic’ effects involving both conscious-level decision-making, and also at the molecular level from an individual’s own metabolic processing enzymes. Acetaldehyde toxicity resulting from ethanol intake can also have both immediate effects (sickness, flushing) and more subtle long-term negative consequences (induction of tumors). And (to invoke the analogy with autoimmunity), some individuals are highly sensitive to the effects of acetaldehyde produced from ethanol directly as a result of their genetic backgrounds (as with the ALDH2*2 or ADH 1C*2 alleles).
Pursuing this theme a little further, it’s interesting to consider (as mentioned above in passing) that acetaldehyde can also be produced from ethanol through the Phase I metabolic enzyme CYP2E1. Normally, though, the contribution of CYP2E1 is small, except in the case of heavy habitual drinkers, where the enzyme becomes induced. But alcohol is certainly not the only target for this enzyme, given the promiscuous range of substrate recognition by Phase I metabolic catalysts. It turns out that CYP2E1 converts the common analgesic drug acetaminophen (paracetamol) into toxic derivatives, and when high CYP2E1 levels have been induced, serious liver toxicity can result from normally innocuous acetaminophen doses. Here, then, is another link between a higher-level behavior (albeit pathologized by alcohol addiction) and a ‘blind’ molecular process, both of which elicit ‘autotoxic’ effects.
In mice, the effects of CYP2E1 can be dramatically documented with gene knockouts. Removal of CYP2E1 activity by genetic ablation greatly reduces acetaminophen toxicity. Toxicity for another one of its substrates, benzene, is similarly removed, whereas normal mice given comparable benzene doses are severely affected.
Xenobiotics and Induced Receptor Activity
Now to consider a different pathway for self-inflicted deleterious effects from xenobiotics. Here the focus will be on the highly toxic compound 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), very often simply referred to as dioxin (although ‘dioxin’ per se is not chemically specific, and is also used to refer to related compounds).
Structure of TCDD
The combination of very high toxicity, environmental persistence, and generation as an unwanted industrial by-product renders TCDD a special problem. It came into particular prominence following the major contamination and human exposures resulting from the chemical plant accident at Seveso, Italy in 1976. Another boost to the notorious reputation of this compound came in 2004 from its use in the deliberate poisoning of the former president of the Ukraine, Victor Yuschenko, evidently an assassination attempt. (It is a little ironic that he is the second individual living within the former territory of the Soviet Union to be featured in this series of posts as a notable victim of a malicious toxic assault. The other person of note was the Russian Alexander Litvinenko, who succumbed to radioactive polonium, as noted previously).
TCDD exerts its effects through binding to the above-mentioned aryl hydrocarbon receptor (ArHR), resulting in its prolonged activation and pathological expression of ArHR target genes (when the receptor is translocated to the nucleus and acts as a transcription factor). As a xenosensor (depicted in Figure A above), the ArHR activates expression of various downstream xenobiotic-metabolizing enzymes, but TCDD is a poor substrate for them, being only very slowly metabolized. Combined with its high fat solubility, this compound has very long persistence in humans, and its in vivo presence is thus associated with long-term over-activation of the ArHR.
Animal studies provide strong evidence that binding and activation of the ArHR by TCDD is its primary, if not exclusive, mechanism of toxicity. Mice with the ArHR gene ‘knocked out’ by gene targeting technology show a stunning resistance to TCDD-mediated effects. (Although the ArHR certainly has normal physiological roles, and animals lacking this receptor show certain defects, they can grow to maturity and thereby allow such toxicological studies to be conducted).
To refocus on the ‘autoimmune’ theme of this section, consider that in toxicology, it is commonly stated that ‘the ArHR mediates dioxin toxicity’ (or words to that effect), reflecting the inescapable conclusions of the above knockout data and many other studies. Rather than a poison directly damaging the functioning of an organism, in this case the poison only creates havoc by effectively enlisting a host factor to initiate self-harm. Consider also that if the ArHR is a component of a system with an important role in detoxifying xenobiotics, then for TCDD (and other known chlorinated polycyclic compounds) the process is subverted towards a biologically dysfunctional end. As such, the TCDD / ArHR precedent would appear to be a classic exemplar of the analogy of ‘misfired’ xenobiotic responses with autoimmune reactions.
Xenobiotics and Real Autoimmunity
Although TCDD and the ArHR are thus used to exemplify a self-damaging process analogized with autoimmunity, ironically they also provide a very cogent link with autoimmunity which is real in every sense. Evidence suggests that xenobiotics can induce autoimmunity by at least two processes, somewhat reminiscent of the above two pathways for xenobiotic-induced self-damage itself. There is now considerable experimental data supporting the contention that self-proteins which have become modified by reaction with xenobiotic compounds (‘modified self’) can trigger immune reactions which cross-react with normal self-structures, and thereby trigger an autoimmune response. This kind of effect has often been termed ‘molecular mimicry’ elicited by the xenobiotic-derived host neoantigens. Alternatively, modification of host proteins by foreign chemicals may generate self-recognition of otherwise cryptic self-epitopes. Exposure to certain heavy metals (mercury in particular) can also trigger unequivocal autoimmunity in animal models, probably by similar mechanisms.
Theoretically, a second broad means of physiological modulation by xenobiotics which might lead to autoimmunity could be differential effects on specific immune cellular regulatory subsets. Real evidence towards this comes from the TCDD / ArHR system once more. It turns out that a special effector helper T cell subset (TH17) bears the ArHR receptor, and prolonged signaling induced by exposure to TCDD activates these cells and exacerbates the development of autoimmune disease in mouse models. (Knockout mice lacking the ArHR accordingly lack this TCDD-induced susceptibility to autoimmunity).
So it seems indisputable that there are good grounds for proposing real intersections between xenobiotic processing, its perturbation, and autoimmune phenomena. And there we leave it for the time being. A point to consider in the next post is why some xenobiotics trigger actions which result in self-damage rather than clear detoxification.
To finish, here are two biopoly(verse) offerings. The first is made with respect to genetic influences on xenobiotic recognition, while the second refers to self-damaging responses to xenobiotic challenge:
People say, ‘So choose your parents well’
For your genotype surely will tell
How well you survive
And prosper and thrive
In a toxicological hell.
The war against toxic attrition
Is a physiological mission
But within this good fight
There are factors that might
Link self-harm as a point in addition.
References & Details
(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).
‘…..proteins called metallothioneins….’ These proteins (of which there are several classes) also have roles in the transport and delivery to specific subcellular sites of metal ions required for normal metabolic function. Metallothionein-mediated protection against metal ion toxicity is best characterized in the case of cadmium, but is also implicated in protection against mercury and possibly lead toxicities. For more detail see Klaassen et al. 2009; Sutherland & Stillman 2011; Gonick 2011.
‘……a variety of endogenous antioxidants…..’ These include Vitamins C and E, glutathione, and numerous others. For a review, see Rizzo et al. 2010.
‘…..the perception of bitterness is mediated by the ‘Type 2’ taste receptors….’ See Behrens & Meyerhof 2009.
‘…..different TAS2R receptors respond to different bitter tastants….’ In this respect, see an article about a database of compounds with bitter taste (Wiener et al. 2012), one of whose aims is to promote the understanding of the recognition of target molecules.
‘ Each [bitter taste] receptor is likely to have its own pattern of structural recognition….’ See Meyerhof et al. 2010.
‘….the processing of ingested drugs …..is divided into three metabolic phases…..’ For aspects of these phases, see Nakata et al. 2006.
‘….. the pregnane X receptor (PXR) and constitutive androstane receptor (CAR)….’ See a review of Tolson & Wang 2010.
‘….other xenosensing nuclear receptors exist.’ These include the peroxisome proliferator-activated receptor (PPAR), the farnesoid X receptor, and hepatocyte nuclear factors (1alpha, 3 and 4alpha). See Dixit et al. 2005; Xu et al. 2005.
‘…..the aryl hydrocarbon receptor.…’ For a general background, see Abel & Haarmann-Stemmann 2010.
‘…..the deleterious effects of both heavy metals and mineral fibers were noted. In both of these cases, genetic influences on host responses have been recorded….’ For the heavy metals mercury and lead, a number of genes have been implicated (see Gundacker et al. 2010). In the case of mineral fiber-related diseases (especially mesothelioma caused by asbestos), it was noted in a previous post that cofactors were certainly involved. A genetic predisposition towards mesothelioma resulting from another mineral fiber (erionite) has been identified through family studies in Turkey (Dogan et al. 2006; Below et al. 2011).
‘ It might be thought that any innate defense system would bypass this problem [autoimmunity]….’ As an example of this point of view made before contrary evidence emerged, see Medzhitov & Janeway 2000.
‘….certain aspects of innate immunity can indeed help trigger autoimmunity under specific circumstances.’ Without going into too much detail, this can involve circumstances where specific types of innate recognition are controlled by cellular compartmentalization, and its perturbation in pathological states. A little more is provided in the adjunct ftp site (Extras; Chapter 3, Section A3) for Searching for Molecular Solutions. See also Rai & Wakeland 2011.
‘…….benzo[a]pyrene is modified by Cytochrome P450 enzymes to an active epoxide derivative…….’ See Ling et al. 2004.
‘….and finally into carbon dioxide and water….’ This occurs via the formation of acetyl-CoA and the citric acid cycle, described in any standard biochemistry text.
‘…..acetaldehyde, a known carcinogen….’ / ‘….ALDH2*2 ….. the ADH allele 1C*2.‘ / ‘….the role of acetaldehyde in ethanol-induced cancers is well-characterized.’ For more background information on these topics, see Visapää et al. 2004; Seitz & Stickel 2009.
‘……..drugs inhibiting the ALDH2 enzyme [have] been the basis of one type of treatment for alcoholism….’ The classic drug in this regard is disulfiram, although the merits of its use are still controversial. See (as an example from a large literature) Jorgensen et al. 2011.
‘…..some individuals are highly sensitive to the effects of acetaldehyde produced from ethanol directly as a result of their genetic background (the ALDH2*2 or ADH 1C*2 alleles )…..’ A very interesting recent development is the observation that the ALDH2*2 mutation results in incorrect protein folding, a defect which can be corrected by a low-molecular weight ‘chemical chaperone’. (See Perez-Miller et al, 2010) Thus, in the near future perhaps enforced non-drinkers may become capable of imbibing alcohol by co-use of drugs assisting their endogenous defective aldehyde dehydrogenase enzymes, although it’s possible that not everyone would agree that this is a good thing.
‘……chemical plant accident at Seveso, Italy in 1976…..’ For more on this, see an online article from Time magazine. The accident was clearly associated with many cases of a skin disorder caused by dioxin (chloracne), but the effect of dioxin exposure on cancer rates in the exposed Seveso population have been controversial. In this regard, see Boffetta et al. 2009. (TCDD is clearly carcinogenic in animal models).
‘…..the deliberate poisoning of the former president of the Ukraine, Victor Yuschenko….’ Despite bearing massive amounts of TCDD, Yuschenko survived, albeit with severe chloracne, with his symptoms slowly improving over several years. His clinical profile has been studied and reported (see Sorg et al. 2009). Unless the intention is to simply cause great pain, discomfort and disfigurement, TCDD would seem a foolish choice for malicious poisoners. Unlike rodents and other mammals, humans are not particularly susceptible to lethal effects from TCDD. Also, its overt clinical manifestation (chloracne), its in vivo persistence, and its ready detection render intoxication with TCDD easily proven.
‘ Mice with the ArHR gene ‘knocked out’ ….. show a stunning resistance to TCDD-mediated effects.’ For a review of such studies, and other xenosensor knock-outs, see Gonzalez et al. 1995.
‘…..the ArHR certainly has normal physiological roles……’ See Abel & Haarmann-Stemmann 2010 for background information on ArHR biology.
‘……self-proteins which have become modified by reaction with xenobiotic compounds …… thereby trigger an autoimmune response.’ Although protein modifications by xenobiotics have been known for over half a century, much research in the past few decades focused on DNA chemical adduct formation, given the obvious link in such cases with mutation and aberrant DNA processing or replication. More recently, it has become clear that protein damage too can have grave pathological consequences, of which autoimmunity is a significant part. The study of xenobiotic-mediated protein adducts has greatly benefited from recent advances in proteomic technology. See Liebler 2008 for a detailed exposition of these matters.
‘…..Exposure to certain heavy metals (mercury in particular) can also trigger unequivocal autoimmunity….’ See a review by Schiraldi & Monestier 2009.
‘…..a special effector helper T cell subset (TH17) bears the ArHR receptor, and prolonged signaling induced by exposure to TCDD …… exacerbates the development of autoimmune disease…..’ For more on this (and other aspects of TCDD effects on immunity) see Veldhoen et al. 2008; Esser et al. 2009.
Next biopolyverse offering to be posted in May, given current commitments.
The theme of the previous post concerned how human diseases could be triggered by environmental compounds with slow and subtle effects, with an emphasis on those which occur naturally. (The interest in natural exemplars of such effects arises from earlier posts on ‘Natural Molecular Space’). A principal theme in this follow-up post will be comparing cancer and cellular degeneration induced by environmental agents.
Subtle Carcinogens and Other Problems
With the exceptions of Polonium-210 and asbestos, the ‘subtle poisons’ considered previously were neurotoxic organic molecules. But organic cancer-causing compounds have been described for a long time. The first description of an association between a specific cancer and an industrial (work-related) activity dates back to the 18th century, when a rare form of scrotal cancer was linked to chimney sweeping. From the time of publication of this finding, it took almost 160 years for science to advance far enough for the active component in soot and coal tars to be identified as benzo[a]pyrene, a polycyclic aromatic hydrocarbon. Certain other polycyclic aromatic hydrocarbons (known collectively as PAHs) are also carcinogenic.
Of course, we now know that a whole zoo of both natural and artificial compounds can induce cancer, with varying degrees of potency. It isn’t the intended scope of this post to review a great number of specific cases here, but among the natural set of known carcinogens, an important group are derived as secondary metabolites of various fungal organisms (metabolic products which are not essential components of fundamental life-support processes). While some secondary metabolites (such as antibiotics) have been extremely beneficial to humans, a ‘dark side’ of such secondary metabolism also exists. Not all toxic fungal products (or mycotoxins) are proven carcinogens, but some most certainly are. Probably the most significant in economic and human disease impact are a group of closely related compounds called aflatoxins, produced by various species of the fungal Aspergillus genus (most notably A. flavus). Aflatoxin B1 is the most potent known natural liver carcinogen, and a major problem as a side-effect of fungal contamination of foodstuffs, such as peanuts.
Sometimes it is the case that carcinogens are not directly found in certain natural food materials, but are actually formed during cooking processes. There is a certain irony here, because on the whole, cooking of many foods is beneficial through the killing of potentially dangerous parasites, especially those harbored in raw meat. And apart from the generally detrimental effects of parasites on health, a number of such organisms are themselves directly linked to the generation of specific cancers. Yet during ordinary cooking of meats, carcinogenic heterocyclic amines can form, and if charring is involved (as with barbecuing), polycyclic aromatic hydrocarbons can be created. Among the latter is found benzo[a]pyrene, the same compound of chimney sweep fame as noted above. Strictly speaking, carcinogens formed by cooking are not ‘natural’, since they require human intervention for their formation. Indeed, cooking itself has been considered a useful marker for distinguishing humans from all other organisms, including our primate relatives, and may have even shaped evolutionary pathways leading to modern humans. Still, while carcinogenic compounds resulting from cooking clearly arise from human agency, their formation has always been completely inadvertent, and occurred long before the faintest glimmerings of modern chemistry.
Subtlety of effect, at least as measured by the time between exposures and onset of disease, is practically a by-word for carcinogens, as well as the ‘subtle’ neurotoxic agents considered in the previous post. This is not to say that these two broad areas of pathology cover everything where subtlety rears its head, but they may safely be grouped as the major concerns. Beyond this, one needs to consider other physiological systems which may be damaged or negatively affected slowly and subtly by non-biological environmental agents, but not with tumorigenic outcomes. One case in point is the immune system, and there are precedents for natural compounds with immunosuppressant qualities. In this respect, it should be noted that toxic compounds can have multiple effects, and aflatoxins (for example) have immunosuppressive activity as well as their other noxious manifestations. Reproductive systems can be adversely affected by natural phytoestrogens, as considered in more detail in a previous post.
These other issues aside, cancer and toxic neurological disease can be seen as book-ends in terms of the gross effects leading to divergent pathological results. Let’s consider this statement a little further.
Growth or Degeneration, and a Problem Either Way
A toxic challenge will by definition perturb normal cellular functions. Following such an event, broadly speaking three things can happen. Firstly, an affected cell may, through its endogenous repair system, correct the damage and resume its normal functions. Failing this, as the second alternative, it can die through a number of mechanisms noted in the legend to the figure below. The best-defined form of directed or ‘programmed’ cell death is the process termed apoptosis. But if death itself should fail and replication continue, chromosomal changes induced in the cell may eventually lead to ‘transformation’, where the normal controls on growth are circumvented and a tumor phenotype acquired, the third possible outcome. Successive genetic changes can accumulate, and transformed cells with invasive properties become amplified through their enhanced growth and survival properties. It is no accident that important genes regulating apoptosis are frequently mutated in cancer cells. If checkpoints on cell growth are removed through blockade of cell death, barriers to transformation may be greatly reduced. Indeed, while most carcinogens are also potent mutagens (inducing genetic mutations in genomic DNA), some are not. The latter have been a long-standing puzzle, but it has been shown that non-mutagenic chemical carcinogens are direct blockers of apoptosis, thereby allowing cells with mutations (normally removed by apoptosis) to persist and proceed down transformation pathways.
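The reasoning above — that blocking apoptosis lets mutated cells persist and accumulate further changes — can be sketched with a deliberately simple toy model. Every number here is invented for illustration; nothing below is a measured biological rate:

```python
def expected_mutation_burden(generations, p_mut, p_apoptosis):
    # Toy model: each generation a heritable mutation arises with
    # probability p_mut, and a newly mutated cell is then removed by
    # apoptotic surveillance with probability p_apoptosis. Mutations
    # that survive surveillance accumulate linearly over generations.
    return generations * p_mut * (1.0 - p_apoptosis)

# Hypothetical parameters: 1000 generations, mutation rate 0.001 per
# generation, surveillance removing 99% of newly mutated cells.
intact = expected_mutation_burden(1000, 1e-3, 0.99)   # apoptosis working
blocked = expected_mutation_burden(1000, 1e-3, 0.0)   # apoptosis suppressed
```

With these made-up numbers, suppressing apoptosis raises the surviving mutation burden a hundredfold without any change in the mutation rate itself — which is the essential logic behind non-mutagenic carcinogens that act by blocking cell death.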
As noted in the previous post, recovery from a toxic insult might not necessarily be complete, in the sense that the post-toxic state may be sub-optimal relative to the norm, predisposing the cellular victims to future risk. But leaving such complications and the general area of damage repair aside, the major enduring pathological consequences of a low-level toxic assault revolve around cancer vs. degeneration. These outcomes might seem like diametrically opposed processes, since in one case cells grow wildly without normal constraints, and in the other, they die. While the final end-points are clearly quite divergent, it is interesting that the factors which push cells along these pathways have many regions of overlap. Genetic analyses have shown that many mutations which predispose towards Parkinson’s disease are also associated with certain cancers.
But toxic chemicals can also have dramatically different effects depending on the cellular context in which they act. A specific genotoxic (DNA-damaging) compound found in cycad plants (methylazoxymethanol) can induce neurological damage and degeneration in mice without tumor formation, whereas in the colon it induces tumors at high frequency. Major determinants of the outcome of such toxic challenge are the levels of appropriate DNA repair enzymes (the effectiveness of the DNA damage response), and differential effects on cellular signaling pathways. Up- or down-regulation of specific pathways operating in diverse cell lineages can evidently result in outcomes as distinct as growth or degeneration. A clear distinction between neurons and most other differentiated cells is their cell division status, where non-dividing and long-lived neurons can be contrasted with lineages with active turnover through cell division. Neurons thus permanently exit from the cell cycle into a ‘post-mitotic’ state for the lifetime of the organism.
Indeed, trying to force a mature neuron towards re-entering the cell cycle (by artificially expressing viral gene products which ‘kick-start’ cycling in other quiescent cells) has been observed to promote cell death. Given this piece of information, a differential response to at least some toxic agents can be conceptualized in fairly simple terms: forcing a quiescent cell which is nonetheless ‘primed’ for mitotic cycling back into active division may lead to carcinogenic transformation; doing the same thing to a mature neuron will kill it. This dichotomy is portrayed in the figure below:
Outcomes of mutational damage through low-level genotoxic exposure for neurons vs. non-neuronal cell lineages. In both cases, repair mechanisms exist, which may be insufficient to deal with the problem. Dividing cells may then be diverted into a programmed cell death pathway (usually apoptosis) and thus removed. In a population of renewable replicating cells, this is unlikely to be a direct problem, and of course eliminates a potentially dangerous altered cell. Yet if the shunt towards apoptosis fails for any reason, the altered cell may continue to proliferate and acquire further mutations, with the ultimate consequence of malignant transformation into a fully cancerous phenotype. For neural cells, beyond a certain damage threshold, death is inevitable, even for stimuli that normally promote mitosis in other cell types. Note here that cell death in general can occur by at least three mechanisms, shown specifically for neurons in this schematic. The process of autophagy (a kind of self-recycling of cellular components) is associated with repair processes, but can also constitute a specific cell death mechanism in some circumstances. Apoptosis is a programmed form of death operating through specific cellular signaling interactions, and autophagy can interface with some of these apoptotic pathways (as shown by arrows). Necrosis was originally categorized as non-specific and passive cell death brought on by severe physical or chemical insults. While cell death in such a disordered manner is presumably still a possibility, recent evidence has shown that at least one form of non-apoptotic ‘necrotic’ cell death is also a regulated process, which has been duly termed ‘necroptosis’. In any event, unlike loss of cells with a high mitotic turnover, death of non-dividing neurons will ultimately have significant functional consequences.
And yet the above figure might, upon further reflection, appear overly simplistic. Two additional pieces of widely known information are relevant here: neurological tumors, and neurological plasticity. If neurons die upon transformation, what is the source of brain tumors? And what about the considerable publicity given to the previously unsuspected potential for recovery from substantial brain damage, indicative of ‘neuroplasticity’? A simple answer is that neither the ‘dark side’ of neural tumors nor the much brighter prospects for neural regeneration derive from irrevocably post-mitotic cells; in both cases, neural stem cells, identified only relatively recently, may be the central players. With respect to tumors, the “neural stem cell hypothesis” proposes these cells as the source of primary brain tumors, as opposed to metastatic tumors which originate elsewhere but migrate to the brain and grow there.
The above figure addresses genotoxic substances, but this does not cover other neurotoxic agents such as the compound MPTP, discussed in the previous post owing to its specific role in inducing cell death in the substantia nigral brain region, and thereby leading to induced Parkinson’s disease. As noted previously, MPTP has a distinct toxic mechanism via the enzymatic formation of a specific metabolic product, which is taken up by dopamine-producing neurons. This metabolic derivative then inhibits mitochondrial respiration, leading to cell death. Studies have found, however, that MPTP also has mutagenic properties – or at least, once again, one of its metabolic products is the active compound in such assays. Yet even if MPTP indirectly caused unrepairable genomic lesions in target neurons, the above observations suggest that cell death would still result, rather than prolonged growth and transformation.
To conclude then with a summary of sorts upon this theme:
It seems cancer and neural decay
Are opposed, in a particular way
For to die or to grow
Is the question, you know
And the source of young Hamlet’s dismay
References & Details
‘….an association between a specific cancer ….. chimney sweeping‘ This resulted from observations by Dr. Percival Pott (1714-1788) of scrotal cancer in young chimney sweeps, first published in 1775. See Brown & Thornton 1957 for relevant historical information.
‘….the active component in soot and coal tars was identified as benzo[a]pyrene….’ See Ling et al. 2004 for more information, especially including the structure of benzo[a]pyrene-DNA adducts, by which its carcinogenicity is manifested.
‘…..a whole zoo of both natural and artificial compounds can induce cancer…..’ For a review on the diversity of carcinogens (including but not limited to organic compounds), see Yang 2011.
‘……toxic fungal products (or mycotoxins)…..’ For a useful review, See Pitt 2000.
‘Aflatoxin B1 is the most potent known natural liver carcinogen….’ See a review by Hedayati et al. 2007.
‘…..a number of such organisms are themselves directly linked to the generation of specific cancers….’ If we consider ‘parasites’ in the broadest sense, then there are numerous precedents of viral and bacterially-generated cancers. But in food-related circumstances, ‘parasite’ will most often refer to various worms, some of which are indeed associated with cancer. For example, see Vennervald & Polman 2009 for a review of the status of helminth worms as carcinogenic agents.
‘…..if charring is involved……polycyclic aromatic hydrocarbons can be created….’ See Daniel et al. 2011.
‘……cooking ….. may have even shaped evolutionary pathways leading to modern humans…’ The ‘cooking hypothesis’ has led to a very interesting book by Richard Wrangham, Catching Fire: How Cooking Made Us Human; Basic Books 2009.
‘……there are precedents for natural compounds with immunosuppressant qualities….’ Although in normal circumstances immunosuppression is obviously undesirable, for some medical applications certain natural immunosuppressants have proved a great boon. Such compounds have proven highly useful for suppressing unwanted immune rejection of transplanted organs, and have thereby greatly facilitated the efficacy of transplant surgery in general. These include cyclosporin A and FK506, which form ternary complexes between cellular proteins (cyclophilin and FKBP respectively) and the protein phosphatase calcineurin. See Fox & Heitman 2002 for a review.
‘……while most carcinogens are also potent mutagens…….some are not….’ See Kokel et al. 2006, for presentation of evidence that specific non-genotoxic carcinogens act by suppressing apoptotic cell death.
‘…..many mutations which predispose towards Parkinson’s disease are also associated with certain cancers….’ See Devine et al. 2011.
‘……found in cycad plants…..’ See the previous post for more detail regarding the neurotoxic effects of cycads.
‘A specific genotoxic (DNA-damaging) compound found in cycad plants (methylazoxymethanol) induces neurological damage ….’ This refers to an interesting publication of Kisby et al. 2011.
‘…..cell death…..’ For further information regarding autophagy, see Kaushik & Cuervo 2006. For a perspective on apoptosis in the light of the recently described necroptosis, see Christofferson & Yuan 2010.
‘……publicity which has been given to the previously unsuspected potential of recovery from substantial brain damage….’ Neuroplasticity has received much popular notice largely owing to the book The Brain That Changes Itself, by Norman Doidge. Viking, 2007. Note that one aspect of neuroplasticity, the ability of neurons to exhibit ‘regenerative sprouting’ from axons, is not the same as acquiring the ability to undergo full cell division. See Wieloch & Nikolich 2006.
‘…..the “neural stem cell hypothesis” proposes their origin…..’ For more information, see Germano et al. 2010.
Next Post: Late March.
The past series of posts have largely been preoccupied with the benefits to be had from ‘natural molecular space’, whether the molecules in question are large, small, or functionally linked together in complex (but useful) entire biosystems.
Obviously, some biomolecules are not merely useless, but may be actively harmful. There are a great many bioproducts which are of both high toxicity and obvious impact, at least to the unfortunate victims of serious or even life-threatening natural poisonings or envenomations. But toxic effects can be much more subtle, and therefore much less easily noticed. In fact, the insidious slowness of some toxic effects can render the actual molecular culprits very hard to pin down, and inevitably controversy is thus generated. These ‘subtle negative’ environmental influences are the principal theme for this discussion, which will include natural products, but will also heavily feature both artificial compounds and non-biological but ‘natural’ substances. (The quotation marks are used here since it is very often only through human activities that natural materials with potentially harmful effects are processed and brought into contact with sizable numbers of people).
What Does Subtlety Mean in a Toxic Context?
When we speak of a subtle toxic effect, what is actually meant? It might result from several factors, or any combination of them, including potency, exposure dose, frequency of exposure over time, and the in vivo persistence of the toxic substance. Any ingested toxic compound must by definition interfere with an important biochemical process, with ensuing negative consequences for the functioning of the organism. A poisonous substance might interact with many different biological molecules, but some of these will be of greater import than others in terms of how the resulting deleterious effects are produced. And the affinity of the poison for such biological targets is a determinant of potency.
Potency and dosage over time are inter-related. To qualify as ‘subtle’, intake of a highly potent compound (one whose toxic threshold is reached with very small amounts) would need to be in exceedingly low quantities, where no immediate effects are apparent. If that were the end of it, such a low-level exposure to the toxic agent would have no further consequences. But a subtle deleterious effect might exist if the compound had produced some kind of persistent tissue or cellular damage, of a type that was very hard to detect without sophisticated intervention, and that was not at all appreciable by the individual concerned. Then, several possibilities could exist which in the end would result in a manifested disease state. Firstly, if the individual is re-exposed to the same source of the toxin on more than one occasion, the damage might be cumulative and accrete until it becomes of such significance that an overt illness is produced. If the body’s repair systems cannot comprehensively deal with the low-level induced damage, in some cases even long intervals between exposures might still result in noticeable pathology. But even if the repair is effective, regular intake of similar low doses of the toxic material over time might eventually overwhelm the host defenses, again leading to disease.
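The interplay sketched above between repeated dosing and incomplete repair can be captured in a minimal toy model, where each exposure adds a fixed increment of damage and repair removes a fixed fraction of it before the next exposure. All the values are invented for illustration only:

```python
def damage_trajectory(n_exposures, dose, repair_fraction):
    # Residual damage after each exposure: the new dose adds to whatever
    # remains from before, then repair removes a fixed fraction of the
    # total before the next exposure arrives.
    d, history = 0.0, []
    for _ in range(n_exposures):
        d += dose
        history.append(d)
        d *= 1.0 - repair_fraction
    return history

# Same dose schedule, different repair efficiencies (hypothetical numbers):
efficient = damage_trajectory(50, 1.0, 0.5)    # plateaus near dose / 0.5 = 2
sluggish = damage_trajectory(50, 1.0, 0.05)    # climbs toward dose / 0.05 = 20
```

The point of the sketch is that damage plateaus at roughly dose divided by the repair fraction: efficient repair caps the burden at a low steady state, while sluggish repair lets the same low doses accumulate toward a level that could eventually cross a disease threshold.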
These scenarios assume repeated exposures, but even a single exposure could potentially have significant consequences. It might be supposed that a single bout of damage, if not fully repaired, could be another negative event in an individual’s ‘wear and tear’ list that increases with ageing. In other words, any such low-grade but persistent toxic ‘insult’ might become more significant over time, in combination with other problems inevitably occurring through life. But a much more serious possibility has also been proposed, where short-lived exposure to certain chemical agents might actually set up an on-going pathological inflammatory process, even long after the original poison has been removed from the host system. This theme will be looked at in a little more detail in a later post in this series.
At this point, it’s relevant to consider the physiological removal of toxic agents: in other words, how long noxious substances of any description can persist once taken into a host organism. Persistence has clear-cut implications for the ability of a substance to contribute to long-term and subtle deleterious effects. While water-soluble (hydrophilic) compounds are generally metabolized and excreted reasonably quickly, lipid-soluble (hydrophobic) compounds can be taken up by fat reserves and remain there for years, with only a slow diminution with time. A classic example in this regard is the insecticide DDT, whose tendency to persist in adipose (fat) tissue is well-described. Poisons which are themselves toxic elements obviously cannot be further ‘broken down’ chemically, and can persist through their interactions with normal biomolecules. For example, heavy metals such as lead and mercury can bind and inhibit numerous enzymes. Although the resulting complexes between metals and protein molecules may be physiologically degraded, release of the metal component may simply liberate it for another cycle of inhibition. In some cases, a noxious element may be physically or chemically similar to a normal biologically-used element, and replace it in certain biomolecules, with disastrous effects on metabolic activities. This is the case for the toxic elements arsenic (capable of competing with phosphorus) and thallium (capable of competing with potassium).
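The slow diminution of stored lipophilic compounds is often approximated as first-order elimination, where the body burden halves over each fixed half-life. A minimal sketch (the half-life used here is purely illustrative, not a measured figure for DDT):

```python
def body_burden(initial_mg, half_life_years, elapsed_years):
    # First-order elimination: the stored burden halves once per half-life,
    # so after t years a fraction 0.5 ** (t / half_life) remains.
    return initial_mg * 0.5 ** (elapsed_years / half_life_years)

# Assuming a hypothetical 7-year half-life in adipose tissue:
after_21_years = body_burden(100.0, 7.0, 21.0)  # three half-lives
```

Three half-lives still leave an eighth of the original burden, which illustrates why a compound sequestered in fat can remain measurable decades after exposure has ceased.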
Another major class of persistent and dangerous substances are certain mineral fibers, most notably asbestos. Poorly biodegraded long fibers (such as some mineral silicates, of which asbestos is a case in point) can persist indefinitely in specific anatomical sites. Although the mechanism is still incompletely understood, this can be associated with the generation of a chronic inflammatory process and ultimately carcinogenesis. The link between asbestos and mesothelioma is well recorded.
If we cast a wide enough net, another class of non-biological poisons must certainly be included: radionuclides, or radioactive isotopic versions of the elements. These can be either radioactive isotopic versions of normal elements of biological significance, or radioisotopes of non-biological elements. All such cases can be of either natural or artificial origin. Many examples of the former group can be cited, but potassium-40 (40K) is a natural radioisotope of interest, since it contributes the largest portion of the radioactive background in living organisms. As such, it has been proposed as a major source of natural mutation, although experimental results have suggested that its contribution to mutation must indeed be (if anything) a subtle influence. Cases of relevant non-biological radioisotopes are likewise exceedingly numerous. Briefly, consider the example of polonium-210 (210Po), which can occur naturally, or can be generated by artificial nuclear reactions. This radioisotope is present in tobacco smoke, and it has been implicated as a major factor in the generation of smoking-induced cancer. Polonium-210 has also been in the news in recent years, through its use as an exceedingly potent poison in the murder of the ex-Russian agent Alexander Litvinenko in London in 2006. There’s obviously nothing subtle about that, but as with any toxic agent, even polonium-210 can exert low-level effects if ingested in small enough doses. At that lower end of the exposure scale, the effects will vary among different individuals, but may contribute to cancers or other conditions, with an overall shortening of life expectancy.
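The scale of the 40K background can be checked with a back-of-the-envelope calculation from its natural abundance and half-life. The total body potassium figure below is an assumed round number for an adult, not a measurement:

```python
import math

# Estimate the potassium-40 activity of an adult human body.
AVOGADRO = 6.022e23             # atoms per mole
K_MASS_G = 140.0                # total body potassium in grams (assumption)
K40_MOLE_FRACTION = 1.17e-4     # natural abundance of 40K in potassium
K_MOLAR_MASS = 39.1             # g/mol for natural potassium
HALF_LIFE_S = 1.25e9 * 3.156e7  # ~1.25 billion years, converted to seconds

n_k40 = K_MASS_G / K_MOLAR_MASS * K40_MOLE_FRACTION * AVOGADRO
decay_constant = math.log(2) / HALF_LIFE_S  # per-second decay probability
activity_bq = n_k40 * decay_constant        # decays per second (becquerels)
```

The estimate comes out in the region of several thousand decays per second from 40K alone, which makes concrete why this single isotope is regarded as the dominant internal radiation source, while also being far too dilute to produce anything but subtle effects.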
Individual variation in responses to low-level toxic exposure reflects genetic variation in the metabolic processing of foreign compounds, or how the body reacts to the presence of noxious materials. There is much more to be said on this topic, which will be picked up at a later time within this series of posts. But for the time being, we can note this as one of a number of influences bearing on whether a low-level toxic exposure will have longer-term ‘subtle’ effects, depicted in the figure below:
A depiction of the range of various influences which can determine whether a substance could manifest a slow or insidious ‘subtle’ toxicity. Note that an implicit issue within ‘Generation of Ongoing Pathology’ is the ability of host systems to repair and contain toxic insults, as opposed to the generation of responses which are ultimately self-damaging.
The influence termed ‘cofactors’ in the above diagram simply refers to any other non-host factor which can interact with a proposed environmental toxic substance to exacerbate its action, or even be essential for the insidious toxic effect to be manifested in the first place. An interesting example is a putative requirement for the presence of simian virus 40 (SV40) for the generation of mesothelioma by asbestos.
For the rest of this post, I’ll move on to some specific examples of effects which have revealed subtlety in several senses of the word. The first case involves an artificial compound whose toxicity is not strictly speaking an ‘environmental’ effect, since it required self-administration, albeit inadvertent. However, the experience with this compound has had many ramifications which do impinge on environmental influences, both man-made and natural.
(1) Parkinson’s Disease & Toxic Agents
In the early 1980s a remarkable series of events occurred which had implications across several fields of science and medicine. Although terrible and tragic in many ways, it provided a dramatic example of how a toxin can produce quite specific neurological effects, and had direct implications for the origins of Parkinson’s disease (PD). At that time in California, clinicians were confronted with a series of drug addicts in a state of ‘frozen’ mobility, which had many similarities to severe PD. Subsequent scientific detective work showed that this apparent similarity was more than just superficial. The sporadic condition of human PD is characterized by ongoing degeneration in a region of the brain called the substantia nigra, where destruction of neurons normally producing the crucial neurotransmitter dopamine leads to loss of muscular motor functions, eventually immobilizing the patient. These neurons are also pigmented, through the production of a type of melanin (‘neuromelanin’), an early observation which provided the name of this brain area (‘substantia nigra’ = Latin for ‘black substance’). A compound, L-dihydroxyphenylalanine (L-DOPA), which can access the brain and is metabolized to dopamine itself, can greatly alleviate symptoms, especially when first applied. The ‘frozen’ addicts likewise generally showed responsiveness to L-DOPA. By analyzing their common activities, the source of the problem was tracked down to their injection of a street drug preparation, a ‘synthetic heroin’, which in actuality was a botched attempt to make the drug meperidine (pethidine). The preparation that the clandestine chemists had produced contained sizable amounts of a different compound, N-methyl-4-phenyl-1,2,5,6-tetrahydropyridine (MPTP), eventually identified as the toxic culprit by means of animal testing.
These studies also showed that MPTP ingestion resulted in specific damage to the substantia nigra, with associated loss of dopamine-producing neurons and the onset of parkinsonian symptoms.
Structures of some relevant molecules for the Parkinson’s / MPTP story. The amino acid phenylalanine is included as the precursor to dopamine, and to show its chemical similarity to L-DOPA. Meperidine is the drug towards which abortive synthetic attempts led to the formation of MPTP. MPP+ is the actively toxic metabolic product derived from MPTP itself.
The striking features of this story were widely reported in the scientific literature, and even found their way into popular fiction quite quickly. Those unmistakably victimized by MPTP had varying fates, ranging from death within a relatively short time, to survival for over a decade. But behind the initial cadre of severely affected patients, the prospect still remains of many more people developing PD from short-term exposure to MPTP (and initially subclinical damage) even decades ago. And this naturally raises one of the major implications of the whole MPTP saga: if a defined toxin can have such amazingly specific effects, could there not be other toxins in the environment with similar properties, which induce the neurodegeneration seen in ‘sporadic’ parkinsonian patients? In the course of these kinds of speculations, it was noted that the very description of this disease was a relative latecomer in 1817. Could the apparent lack of reporting of this disease in earlier times mean that ‘natural’ PD is actually a toxic condition, associated with the beginnings of the industrial revolution and newly introduced environmental pollutants?
Many studies have been conducted in order to evaluate this and related questions. In particular, exposure to certain insecticides has been a long-standing suspect as a potential agent of PD, but despite ‘probable cause’, this has not been firmly nailed down. These kinds of analyses must distinguish between genetic influences and environmental factors. (Many distinct genes are known to affect an individual’s susceptibility to PD, and this will be further considered in a subsequent post in this ‘subtle’ series). Studies with monozygotic (identical) twins illustrate this. In one detailed 1999 investigation, sets of monozygotic twins showed no significant differences in the concordance (shared incidence in both members of a twin pair) of PD compared to non-identical twin pairs, but only (and this is a crucial point) if the age of onset for either twin was after 51 years of age. Non-concordance of a disease in twin pairs in a controlled study is highly suggestive of environmental causes at least being contributing factors. Consider that if a disease does have a simple genetic origin, significant concordance would be expected in the (essentially) genetically identical pairs. Most cases of sporadic PD occur later in life, also consistent with (but far from proof of) a slow induction from environmental sources. But where PD does occur at younger ages, genetic influences (rare mutations, possibly in combination with environmental factors) might be postulated, and this is consistent with the higher concordance observed with identical twins with relatively young ages at the onset of PD. But the only general conclusion typically made at present is that the origin of sporadic PD is complex, with multiple genetic and environmental influences implicated directly or as suspects. And yet there is no question that, at least in certain genetic backgrounds, MPTP alone can induce a pathology with the key characteristics of PD. How does it do this?
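The concordance logic above can be made concrete with a small sketch. The pair counts below are entirely hypothetical, invented only to illustrate the comparison; they are not the data from the 1999 twin study:

```python
def concordance(pairs):
    # Pairwise concordance: among twin pairs with at least one affected
    # member, the fraction of pairs in which both members are affected.
    # Each pair is (twin_a_affected, twin_b_affected) with 0/1 values.
    affected = [p for p in pairs if p[0] or p[1]]
    both = sum(1 for p in affected if p[0] and p[1])
    return both / len(affected)

# Hypothetical late-onset counts: a few concordant pairs, many discordant.
mz_late = [(1, 1)] * 2 + [(1, 0)] * 18   # monozygotic (identical) pairs
dz_late = [(1, 1)] * 2 + [(1, 0)] * 20   # dizygotic (non-identical) pairs
```

If a disease were driven by a simple genetic cause, the genetically identical (MZ) pairs should show markedly higher concordance than the DZ pairs; roughly equal, low concordance in both groups, as in this toy dataset, is the pattern that points toward environmental contributions.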
A Stealth Poison At Work
Intensive studies on the mechanism of MPTP toxicity revealed that it was not the direct perpetrator of the neuronal damage. MPTP itself is acted upon by a specific enzyme within the brain, monoamine oxidase (MAO) B, which converts this compound into a positively charged species, the N-methyl-4-phenyl-pyridinium ion (MPP+, as shown in the above chemical structure figure). Consistent with this observation, inhibitors of MAO enzymes are protective against the effects of MPTP in animal models. MPP+ itself is capable of using the machinery for dopamine transport into neurons (via the specific dopamine transporter), and this promotes its accumulation in very specific neuronal sites. It is important to note that this particular uptake mechanism also explains the high selectivity of MPTP (the precursor to MPP+) in its toxic action. Once taken up by dopamine neurons, MPP+ itself acts as a primary toxic agent towards mitochondria, through its inhibition of Complex I of the mitochondrial respiratory electron transport chain.
With the MPTP story, a series of processes are thus required for the ultimate toxic effect to be manifested: conversion to MPP+, uptake by dopamine neurons, and inhibition of mitochondrial activities. (These are primary factors; other issues such as specific genetic backgrounds certainly contribute to individual susceptibility, as will be discussed further in a subsequent post). So, it has been noted that this conjunction of requirements would (hopefully) render the occurrence of compounds with analogous properties to MPTP quite rare. With this in mind, are there natural precedents for this kind of noxious chemical agent? This raises the second set of cases to be considered (as noted above): natural toxic substances with ‘subtle’ actions. In many such circumstances, the subtlety is bound up with the difficulty of pinning down the true identity of the pathogenic culprit.
(2) Cycads, Soursops, and other ‘Environmental’ Neurological Diseases
In certain Western Pacific islands, epidemiologists have noted for decades an unusual incidence of a degenerative neurological condition called Amyotrophic Lateral Sclerosis / Parkinsonism-Dementia complex (ALS-PDC). In the language of the Chamorros of Guam, a people living on one of the afflicted island groups, the disease is known as ‘lytico-bodig’. A strong role for genetic influences in the origin of ALS-PDC seemed unlikely, given that it was recorded in diverse ethnic groups in varied Western Pacific locations. For a considerable time, though, a dietary item has been implicated: the consumption of a flour made from the seeds of cycad plants available in the affected locales. This remains unproven and controversial, particularly since a specific compound has not been conclusively identified. Yet the general ‘cycad hypothesis’ has support from a number of linked observations. Cycad flour fed to experimental animals over time induces a neurological condition with features of progressive parkinsonism, with associated damage to the substantia nigra. Also, the incidence of ALS-PDC has been in decline in recent years, and this correlates with changes in diet where the amounts of cycad-derived material have markedly declined. A specific amino acid, β-methylamino-L-alanine (BMAA; not found in normal proteins) has been repeatedly linked with cycad-induced disease, but proof of its role has consistently fallen short of the mark. Another contender is methylazoxymethanol (MAM, a metabolite derivative of the cycad compound cycasin), which has been shown to produce neurological genotoxicity.
Whatever the outcome of these studies, there is no question that raw cycad seeds (from which flour is derived) are quite poisonous, and this has long been known to Western Pacific peoples. But by using extensive washing and soaking procedures, they have ingeniously found a way to exploit this otherwise-useless material as a valuable foodstuff. The great irony implicit in the ‘cycad hypothesis’ is that although they succeeded in eliminating the acute toxicity of the cycad seeds, they could not remove traces of toxic substances which may have been the agents of subtle and insidious neurological damage.
Another potential natural molecular assailant of neurons is also found in an island setting, but in the West Indies. A high incidence of an ‘atypical’ parkinsonism has been identified on the island of Guadeloupe. (One example of the atypical nature of this condition is its failure to respond to L-DOPA.) This has been linked by epidemiological studies with the consumption of the tropical fruit called soursop, and a specific compound from this fruit (annonacin) has been implicated as the probable underlying source of the pathology. Annonacin is an inhibitor of mitochondrial Complex I, and can also induce loss of dopamine neurons in the substantia nigra of experimental animals – findings which cannot help but stimulate recollection of the MPTP story, even if there are many points of divergence.
Finally, it’s interesting to note that both ALS-PDC of the Pacific and the Guadeloupe disease also have pathological features of ‘tauopathies’, or diseases associated with abnormal intracellular distribution of a protein called tau, which is normally found in conjunction with neuronal microtubules (a part of the cytoskeleton). In addition, one aspect of the neuropathy induced by annonacin is abnormal neuronal tau behavior. But a massively more frequent and consequential tauopathy is Alzheimer’s disease, so these findings raise the fascinating question as to whether environmental toxic agents might contribute to the burgeoning world-wide caseload of Alzheimer’s – and if so, how much, and under what genetic circumstances? The significance of such questions for public health in countries with increasingly ageing populations is obvious.
One point already alluded to above is the notion that a transient ‘hit and run’ exposure to a toxic substance might set up a continuous and actually self-perpetuating cycle of damage. Such a possibility could considerably complicate attempts to identify causative toxic agents. If a single short-lived exposure (or transient set of exposures) to an agent can result in disease many years later, it is clear that fingering the original culprit becomes correspondingly more difficult. It remains a possibility that such effects are relevant to the cycad saga at least, but a more detailed consideration of this notion is a topic for a later post in this series.
In the meantime, a biopoly-verse rumination:
Bring genetics and host factors to view
Where some insidious poisons can brew
To stay and remain?
Or start off a chain
Of damage in an unfortunate few.
References & Details
‘ A classic example in this regard is the insecticide DDT……’ (With respect to persistence in fat). See Turusov et al. 2002.
‘….arsenic (capable of replacing phosphorus) and thallium (capable of replacing potassium).’ With respect to arsenic, it is interesting to recall the recent controversy regarding ‘arsenical life’, where arsenic in a specific bacterium was reputedly replacing phosphorus (see a previous post for brief detail on this). Arsenic can compete with phosphorus when it is in the form of arsenate (See Kaur et al. 2011; and also Dani 2011 for a discussion of the biological significance of this). For more details regarding thallium and its competition with potassium, see Hoffman 2003.
‘….release of the metal component may simply liberate it for another cycle of inhibition. This can be overcome if a chemical agent (a chelator) is administered which is capable of tightly binding the metal, solubilizing it, and allowing it to be excreted.’ See Flora & Pachauri 2010; Jang & Hoffman 2011.
‘….potassium-40 (40K) …. has been proposed as a major source of natural mutation, although experimental results suggest that its contribution to mutation must indeed be a subtle influence.’ See Gevertz et al. 1985 for more detail and a refutation of the importance of this radioisotope for mutation, at least in bacteria.
‘…..polonium-210 (210Po), …is present in tobacco smoke, and it has been attributed a major role in the generation of smoking-induced cancer….’ See Zagà et al. 2010.
‘ Polonium-210 has been in the news in recent years, through its use as an exceedingly potent poison in the murder of the Russian Alexander Litvinenko…..’ Polonium-210 is an α-emitter (Helium-4 nuclei). While these emitted particles are relatively massive and poorly penetrating, they are very dangerous if an α-source has been ingested. Doses as little as 1 μg may be lethal in susceptible individuals, and doses of several hundred μg will be universally fatal. See Scott 2007. For more details on the Litvinenko case, see a BBC timeline article.
‘….polonium-210 can exert low-level effects if ingested in small enough doses.’ See also Scott 2007.
‘ The influence termed ‘cofactors’ ….. example is a putative requirement for the presence of simian virus 40 (SV40) for the generation of mesothelioma by asbestos….’ See Rivera et al. 2008; Qi et al. 2011. Note that SV40 was a contaminant of early Salk polio vaccine preparations (see Vilchez & Butel 2004).
‘….origins of Parkinson’s disease…..’ This disease (the ‘shaking palsy’) was first described in the early 19th century by Dr. James Parkinson (Thomas & Beale 2007), who thus bequeathed his name to it. Although obviously an eponymous title, the “Parkinson” is often now rendered with a lower-case ‘P’.
‘ These neurons are also pigmented…..’ Melanocytes, the cells in the skin which produce the pigment melanin responsible for skin color (along with the related pigment pheomelanin) are derived from the same embryological origins as neurons, the neural crest.
‘….a type of melanin (‘neuromelanin’)….’ Neuromelanin is chemically similar, but not identical to, the black melanocyte pigment, which itself is often termed ‘eumelanin’. See Zecca et al. 2001.
‘…..the source of the problem [Parkinson-like illness] was tracked down…..’ See Langston et al. 1983.
‘….widely reported in the scientific literature….’ For example, see an article in 1984 by Roger Lewin in Science, whose title (‘Trail of Ironies to Parkinson’s Disease’) speaks for itself.
‘…even found their way into popular fiction quite quickly….’ The well-known ‘new wave’ science fiction novel Neuromancer by William Gibson (Ace Science Fiction, 1984) features a particular scene where an individual is deliberately victimized by means of the nasty aspects of MPTP neurotoxicity. Since the book was first published in 1984, this was at the time a very quick uptake on a scientific and medical development.
‘ Those unmistakably victimized by MPTP had varying fates…..’ See Langston’s popular book (co-authored with Jon Palfreman), The Case of the Frozen Addicts (Pantheon, 1995). Also see a Wired magazine article.
‘…..a relative latecomer in 1817…..’ See the above note about James Parkinson.
‘….‘natural’ PD …. a toxic condition…?’ See Calne & Langston 1983.
‘….exposure to insecticides ….as a potential agent of PD …. not been firmly nailed down…’ See Brown et al. 2006.
‘ Most cases of sporadic PD occur later in life….’ Only 1-3% of total PD cases can be attributed to direct genetic causes (See Lorinicz 2006).
‘….MPTP itself is acted upon by a specific enzyme within the brain, monoamine oxidase….’ See Herraiz 2011 (a).
‘…..inhibitors of MAO enzymes are protective against the effects of MPTP…..’ Herraiz 2011 (b).
‘….also explains the high selectivity of MPTP (the precursor to MPP+) in its toxic action…’ For an early report on MPP+ uptake, see Javitch et al. 1985.
‘….it [MPP+] acts as a primary toxic agent towards mitochondria….’ For a little more detail on mitochondrial activity, see a previous post. For more on Complex I in general, and with respect to MPTP / MPP+, see Schapira 2010.
‘….epidemiologists have noted an unusual incidence ….ALS-PDC…’ For an entertaining account of the history of this topic, see The Island of the Colour-blind (Picador, 1996; Book Two, Cycad Island) by the famous neurologist Oliver Sacks. For a general overview of ALS-PD, see Steele 2005.
‘ Cycad flour fed to experimental animals…..’ See Shen et al. 2010.
‘ A specific amino acid …BMAA….has been repeatedly linked with cycad-induced disease…’ For a review and disputation of this, see Snyder & Marler 2011.
‘ Another contender is methylazoxymethanol….’ See Kisby et al. 2011.
‘….a specific compound from this fruit (annonacin) has been implicated….’ See Champy et al. 2004; Lannuzel et al. 2008. Other compounds chemically related to annonacin have also been implicated: See Alvarez Colom et al. 2009.
‘…one aspect of the neuropathy induced by annonacin is abnormal neuronal tau behavior…’ See Escobar-Khondiker et al. 2007.
Next Post: This is the last post for 2011; will be back early next year.
From time to time it seems appropriate to offer updates (or upgrades) of previous posts. In late March, I looked at ‘paradigm shifts’ in biological science, particularly in the context of so-called biological ‘dark matter’. Here a Table was provided with a list of some developments in recent bio-history which could qualify as paradigm shifts, especially against the current background where the meaning of a scientific ‘paradigm’ has been diluted in much of the literature. While this Table was not originally intended to be completely comprehensive, after the fact I have noted that a particularly important case was inadvertently overlooked. That is the subject of the current post.
The Chemiosmotic Hypothesis
Cellular processes require energy, and a universal energy ‘currency’ is the molecule adenosine triphosphate (ATP). It has been long recognized that the hydrolysis of ATP to the corresponding diphosphate (ADP) provides the free energy for driving a host of biological reactions. The synthesis of ATP itself is therefore of crucial significance, and naturally requires an energy source in order for this to be accomplished.
In 1961, a British biochemist by the name of Peter Mitchell published a paper in Nature outlining a novel proposal for the mechanism of the generation of ATP through the electrochemical properties established across certain biological membranes. These are found in prokaryotes, and also in eukaryotes via their mitochondria (the ubiquitous organelles concerned with energy production) or chloroplasts (the plant cellular organelles mediating photosynthesis). Mitchell’s ‘chemi-osmotic’ hypothesis postulated that, rather than relying on an energy-rich chemical intermediary, oxidative phosphorylation (the synthesis of ATP from ADP occurring during respiration) was dependent on proton (hydrogen ion) flow across membranes. In essence, respiratory processes pump protons across an enclosed membrane boundary such that an electrical potential is generated across the membrane. Mitchell termed the ‘pull’ of protons back across the membrane the ‘proton motive force’, or a proton current. This flow of protons could be directed through protein-mediated channels for the purposes of performing useful work.
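The proton motive force described above can be put in quantitative terms with the standard relation Δp = Δψ − (2.303RT/F)·ΔpH, where Δψ is the electrical potential across the membrane and ΔpH is the pH difference between the two sides. A minimal sketch follows; the particular Δψ and ΔpH values used are illustrative, textbook-scale figures only, and sign conventions vary between sources.

```python
# Illustrative calculation of the proton motive force (delta-p):
#     dp = d_psi - (2.303 * R * T / F) * d_pH
# where d_psi is the membrane potential (mV) and d_pH = pH(in) - pH(out).

R = 8.314     # gas constant, J / (mol K)
F = 96485.0   # Faraday constant, C / mol

def proton_motive_force(delta_psi_mV, delta_pH, temp_K=310.0):
    """Return delta-p in millivolts for a given membrane potential and pH gradient."""
    # 2.303 RT/F in millivolts (about 61.5 mV per pH unit at 37 C)
    z = 2.303 * R * temp_K / F * 1000.0
    return delta_psi_mV - z * delta_pH

# A mitochondrion-like example (values assumed for illustration):
# matrix ~150 mV negative, and ~0.75 pH units more alkaline, than the outside.
dp = proton_motive_force(-150.0, 0.75)
print(round(dp, 1))  # approximately -196 mV
```

Note how the chemical (ΔpH) and electrical (Δψ) terms simply add: either component alone can in principle drive the ATP synthase, which is exactly what the bacteriorhodopsin reconstitution experiments exploited.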
Although now enshrined within the modern biochemical world-view, in the early 1960s this notion was quite radical, and not at all in tune with many of the ideas of most major researchers in the field at that time. In fact, it took over a decade and a half before enough evidence was garnered to convince most remaining doubters. But Mitchell certainly had the last laugh, being awarded a Nobel Prize for his innovative proposal in 1978.
ATP Synthase and the Chemiosmotic Hypothesis
A remarkable catalytic complex at the core of ATP generation, the membrane-associated ATP synthase (ATPase), has had a central role in the ultimate acceptance of the chemiosmotic hypothesis. This resulted from studies on purified components of the synthase complex and reconstitution experiments, where directed proton flow across sealed model membranes (liposomes) was shown to be crucial for ATPase activity. In some ingenious experiments, the required proton flow was produced by the introduction of a protein involved with prokaryotic photosynthesis (bacteriorhodopsin) as a light-driven proton pump. (Other proton pumps from diverse biochemical sources could also perform similar roles). Such findings were subsequently reinforced by numerous structural and functional studies.
The ATPase has been revealed as a molecular motor driven by proton flow directed through the transmembrane (‘Fo’) component of the catalytic complex. The proton current is harnessed to provide energy for driving the physical rotation of the soluble (‘F1’) ATPase component, resulting in ATP synthesis at three catalytic sites. In some amazing cases of experimental virtuosity, this molecular rotation has been visualized in real time using fluorescent tags, and the association of rotation with ATP synthesis demonstrated by magnetic bead attachment to the F1 subunit, followed by artificial rotation induced by appropriate magnets.
The striking nature of the membrane-associated ATPase as a rotary molecular motor has inspired many offshoot thoughts and speculations. As a demonstration of a ‘natural nanomotor’, it should come as no surprise that the nascent field of nanotechnology has taken particular notice.
Why a Paradigm Shift?
So, it might be immediately seen that the proposal, experimental testing, and ultimate support for the chemiosmotic hypothesis is of great scientific significance, but is it really meaningful to refer to it as a paradigm shift? Well, yes, it is. Firstly, the initial resistance to this idea is in itself consistent with the view that a paradigm shift requires the upheaval and dismantling of an earlier view – if not by the death of an aging cadre of reactionary biologists, at least via their eventual accession to the concept through the accumulated weight of evidence.
But perhaps the most fundamental novelty of Mitchell’s ideas came from the inherent aspect of spatial organization of cellular structures in determining function, as he explicitly stated. In his own words, from his 1961 Nature paper:
“the driving force on a given chemical reaction can be due to the spatially directed channelling of the diffusion of a chemical component or group along a pathway specified in space by the physical organization of the system”.
In other words, structures on a cellular scale (membranes, in this case) can serve as a basis for directing biochemical reactions in specific ways, and this general effect has also been termed ‘vectorial biochemistry’. This view was a radical proposal in the early 1960s – and accordingly met with considerable resistance. In fact, cells are not just ‘bags of enzymes’, but partitioned in complex ways into different compartments, and this partitioning is very significant for specific functioning. This is particularly so (as we have seen) for bioenergetics.
The development of some form of membrane compartmentalization of proto-cells during the early stages of the origin of life is recognized as a major evolutionary transition. Its importance can be inferred from simple logic, since an evolving molecular biosystem could never undergo progressive selection and functional advancement were its components not restricted to a bounded spatial compartment. Dilution of reactants would otherwise rapidly remove any useful molecular innovations, and bring in potentially interfering molecules. Included among the latter are likely parasitic systems, whose unchecked activities would be a permanent stumbling block. But the long-term implications of the chemiosmotic principle show us that biological membranes are much more than just phospholipid sacs demarcating collections of biological molecules from the external environment. They are integral and essential parts of biological operations in their own right. And their evolution into these roles is a very ancient event in the history of life. Leaping from early biogenesis to future human aspirations, the importance of membranes and higher-level structures for vectorial direction of function should not be forgotten when artificial cell design is contemplated.
So Mitchell’s contribution is duly inserted into the original ‘paradigm shift’ Table thus:
It is also notable that this year marks the 50th anniversary of the publication of Mitchell’s seminal paper.
And finally, a biopoly(verse) salute to the pioneer:
The hypothesis chemiosmotic
Made Mitchell seem quirky and quixotic
But opinions revise,
And then a Nobel Prize
Sealed the field as no longer exotic.
References & Details
‘……a British biochemist by the name of Peter Mitchell published a paper in Nature…’ See Mitchell 1961.
‘….Mitchell ….. awarded a Nobel Prize for his innovative proposal in 1978.’ See Harold 1978; also the Nobel organization site for the 1978 Chemistry prize. See also a relevant piece in Larry Moran’s Sandwalk blog. Mitchell died in 1992.
‘…..studies on purified components of the synthase complex…..’ A major contributor to these studies was Efraim Racker (1913-1991); a biographical memoir by Gottfried Schatz (National Academies Press, online) provides an excellent background to this and numerous related areas. Paul Boyer and John Walker also were pivotal in structure-function studies regarding ATP synthase, for which they received the Nobel Prize for Chemistry in 1997. For a very recent and comprehensive review of the membrane-associated rotary ATPase family, see Muench et al. 2011.
‘…..the introduction of a protein involved with prokaryotic photosynthesis….’ See Racker et al. 1975.
‘…..nanotechnology has paid particular notice….’ See Block 1997 (Article title “Real Engines of Creation”, which refers to K. Erik Drexler’s book Engines of Creation, a pioneering manifesto of the potential for nanotechnology – Doubleday, 1986). Also see Knoblauch & Peters 2004.
‘…..artificial cell design…..’ See a previous post on synthetic genomes and cells for more on this cutting-edge topic.
Next Post: Regrettably, work commitments enforce a temporary hiatus on biopolyverse posts until early December. But will return then!!
A considerable number of the recent series of posts have been concerned with molecules that can be referred to as drugs. It seems useful here to take a look at this from a semantic point of view.
Drugs at Different Levels
Most people carry around in their minds more than one specific meaning ascribed to the small word ‘drug’. If you hear “He’s a drug dealer” or, “She’s on drugs”, the references are not likely to be to antibiotics or blood-pressure medication. Conversely, the sentence, “My doctor prescribed a drug for me” is most unlikely to refer to crystal meth. As a result, the great majority of people realize (even if only intuitively) that the illegal group of drugs is but a subset of a much larger one, which includes substances both universally approved and potentially truly life-saving. It would then be reasonable to assume that the definition of a ‘drug’ in this larger sense should be fairly straightforward.
Some standard dictionary references are along the lines of: “A substance used in the diagnosis, treatment, or prevention of a disease or as a component of a medication” or, “a chemical substance that affects the processes of the mind or body”. The US Food & Drug Administration (FDA) defines drugs with wording introduced by the Food, Drug, & Cosmetic Act of 1938, as “articles (other than food) intended to affect the structure or any function of the body of man or other animals” [Sec. 201(g)(1)]. These definitions are quite broad, especially the latter. But speak of ‘drugs’ with a pharmacologist, and small molecules are most likely to be the topic of conversation, and not just any small molecules. Indeed, the term ‘drug-like’ is frequently used in the general field of drug discovery to encapsulate (so to speak) the special features which a successful medicinal drug should embody. Obviously, a drug must have definable function(s), which means that it must be directed to a specific molecular target or a limited set of targets (very often, but by no means exclusively, proteins). But a number of additional properties are very important if the drug is to successfully survive in an active form long enough to be useful, and to find its way to the desired target when administered to a patient. For example, a simple set of guidelines for evaluating a candidate compound formulated by Lipinski has been termed the ‘rule of five’, owing to the recurrence of five (or multiples of it) in the definition of the useful range of properties to look for. Getting a drug to where it needs to go is the preoccupation of the burgeoning field of drug delivery, which now intersects in many cases with advances in nanotechnology.
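The four cutoffs of Lipinski's rule of five are simple enough to state in a few lines of code. The sketch below uses the published guideline values (molecular weight ≤ 500 Da, logP ≤ 5, ≤ 5 hydrogen-bond donors, ≤ 10 hydrogen-bond acceptors); the property figures supplied for the aspirin-like example are approximate and purely illustrative.

```python
# A minimal sketch of a Lipinski 'rule of five' screen. In practice such
# filters are applied via cheminformatics toolkits; this just shows the logic.

def rule_of_five_violations(mol_weight, log_p, h_donors, h_acceptors):
    """Count how many of Lipinski's four guideline criteria a compound violates."""
    violations = 0
    if mol_weight > 500:   # molecular weight over 500 daltons
        violations += 1
    if log_p > 5:          # octanol/water partition coefficient over 5
        violations += 1
    if h_donors > 5:       # more than 5 hydrogen-bond donors
        violations += 1
    if h_acceptors > 10:   # more than 10 hydrogen-bond acceptors
        violations += 1
    return violations

# Aspirin-like property values (approximate, for illustration only):
print(rule_of_five_violations(mol_weight=180.2, log_p=1.2,
                              h_donors=1, h_acceptors=4))  # prints 0
```

A compound failing two or more criteria is conventionally flagged as unlikely to be orally bioavailable, though the rule was always intended as a guideline rather than a hard filter, as the Zhang & Wilkinson discussion cited below makes clear.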
These rules were designed expressly with reference to small molecules, since increasing molecular size is often associated with diminishing returns in terms of delivery, and sometimes physical properties such as solubility. But that is certainly not to say that large molecules cannot be useful pharmacological and therapeutic agents. In the previous post, it was noted that it has only been in very recent times (historically speaking) that large proteins (especially antibodies) could be garnered from the biosphere for useful human applications. In fact, (as also noted), antibodies have become a billion-dollar industry, especially where monoclonal or specifically engineered antibody variants are concerned. And these antibodies are often referred to as drugs, especially in lay usage. Though obviously moving beyond a tight pharmaceutical definition of ‘drug-likeness’, this is perfectly consistent with the above broad definitions, including that from the FDA. But there are some gray areas…..
Drugs and Category Overlap
It is quite clear that some classes of substances or preparations may have members which have dual drug and nondrug characteristics. One such case noted by the FDA is the field of cosmetics, where even quite narrow product types can differ in this kind of duality. For example, shampoos may be seen as primarily hair cleansing, or cosmetic, preparations, and indeed many are simply that and no more. But certainly some have additional functions, such as treating fungal (dandruff) or louse infestations. In this case, specific compounds are added for the indicated medicinal specifications. Although it is obviously of practical significance for product safety and efficacy requirements that some preparations should be regulated and licensed as both drugs and cosmetics, here the relevant products are mixtures, and specific molecules are not doing functional double-duty. For example, an anti-dandruff shampoo may have many different components, but the most significant are the detergent (usually sodium lauryl sulfate, for the cosmetic washing function) and the dandruff inhibitor (specific compounds such as zinc pyrithione). In other words, we cannot speak of either of these individual compounds as having an overlapping drug / cosmetic function; it is only as mixtures (along with various other materials) that the product as a whole acquires this status.
Yet there are certainly well-defined cases where specific molecules share categorization as a drug in combination with other properties. A prime case in point is a contender for the title of oldest drug used by humans: ethyl alcohol, or ethanol. Its psychoactive and other physical effects clearly indicate its drug status, but ethanol can also be metabolized to yield specific calorific value. As such, it is then a food as well as a drug. This is straightforward, but some other areas of ‘foods’ are less so. One definition of a ‘food’ might focus on the ability of a substance to be digestible, or act as source of energy, but clearly this is not sufficient for a healthy diet. There are numerous nutritional ‘cofactors’ which are essential for human health, included among which are a number of inorganic elements (principally metals, but also some other trace elements), and a group of vitamins.
Where do vitamins stand with respect to drug classification schemes? As small molecules which act as organic enzymatic cofactors for catalysis (coenzymes), the defined vitamins are an essential human dietary requirement, owing to our inability to synthesize them. But since they are not directly digestible themselves, they are classified by the FDA as ‘dietary supplements’, which are encompassed within the broader area of foods, and not drugs. Vitamins, then, fall into this category and therefore would escape labeling as drug materials, unless they were chemically altered from the natural forms. Most dictionary definitions of ‘food’ also include vitamins. Even so, in other quarters vitamins have been clearly depicted as drugs. One basis for doing so is that vitamins can clearly cure diseases – but since the relevant diseases are deficiencies of the vitamins themselves, this would seem to be a special case. As we have seen with alcohol, assignment of a specific compound as a food does not mean it cannot also be a drug. But clearly there is a difference here: vitamins are essential for life, while alcohol (whatever some people might say) is not.
The figure below depicts two separate classifications where the vitamins are considered either as drugs (A) or not (B):
Two depictions of drug categorization and its overlapping areas. These are not to scale in terms of the relative sizes of the respective groups, and are schematic only. ‘Food / nutrients’: This refers to substances which directly provide energy, structural building blocks, or essential assistance with normal metabolic functioning. ‘Dietary cofactors’ in general include both vitamins and inorganic substances (such as essential metals). ‘Proteins / macromolecules’: Not all macromolecular therapeutic agents are proteins, as for example nucleic acid aptamers. A, Vitamins considered as drugs. Some cosmetic preparations contain vitamins, so vitamins are shown to intersect with the ‘drug-overlap’ region of all cosmetics. B, Vitamins excluded from classification as drugs.
Does this cover everything? Well, there is an additional broad grouping of substances termed ‘nutraceuticals’, a hybrid term from ‘nutrient’ and ‘pharmaceuticals’. A nutraceutical in principle can be any food source component with biological properties outside of direct nutrition, but many of the best-known examples are antioxidants. Included among these are phytoestrogen compounds, considered in an earlier post. Resveratrol in particular (see the relevant Figure from this same post) has generated enormous interest for its observed anti-ageing effects.
Where do these compounds reside in the above figure? Although they are by definition associated with some kinds of foodstuffs, they are neither directly digested (as for proteins, and digestible carbohydrates and fats), nor required for essential metabolism (as for vitamins). Therefore, it is logical that they be considered a subset of the large drug category, outside of the macromolecular subregion, as shown above. These compounds can be identified, purified, synthesized, and administered independently of their original sources. In this respect, they are no different from any other small molecule natural products derived from the biosphere.
Drugs as Foreign?
Can drugs be thought of as molecules which are ‘foreign’ to the body to which they are administered? (In other words, compounds which are not synthesized by the human or animal which receives them). In a strict sense, this would include vitamins which are dietary essentials through the lack of synthetic machinery for their production by a host animal or person. But there are problems with this proposal, and vitamins themselves are a case in point. For example, although Vitamin C is essential for human health (scurvy resulting in its absence), rats, mice and numerous other species have no problem making their own. Is Vitamin C then a drug for humans (capable of curing scurvy) but not for rats?
And numerous human proteins can be administered under circumstances where they can be considered drugs. Antibodies are an interesting case in point. Originally, monoclonal antibodies were of murine origin owing to the technological requirements of their production. The xenogeneic (foreign) nature of these proteins resulted in the induction of immune responses against the monoclonal antibodies themselves, when they were given to patients. In more recent times, fully human monoclonal antibodies have been developed, in order to circumvent this very significant problem. Yet an antibody of this type is still not literally and totally ‘self’, since its specific combining site is generated by recombinational and mutational mechanisms such that its exact sequence is not directly encoded in the human germline.
But non-variable molecules both large and small also come into this picture. Think of human growth hormone, of value for treating some forms of dwarfism – and sometimes abused for the purposes of body-building. Numerous other proteins and small molecule hormones can also be cited – so the notion of ‘foreign-ness’ for drugs in general becomes untenable.
Drugs, Poisons, and Doses
Drugs have been termed ‘poisons that save lives’, which carries the implicit message of the importance of dosage. But stating that ‘all drugs are poisons’ may be technically correct at a broad enough level, yet not particularly useful, given the vast differences in dosage ranges for efficacy vs. safety seen with different therapeutic compounds. Here a balance or ‘window’ must be found between the two poles of beneficial drug activity and unacceptable toxicity. The old saying ‘the treatment was successful, but the patient died’ provides an ironic testament to this inherent dilemma of drug pharmacology.
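This efficacy-versus-toxicity window is commonly summarized by the therapeutic index, the ratio of the dose toxic to half of subjects (TD50) to the dose effective in half of them (ED50). The dose figures in the sketch below are hypothetical, chosen only to show how a wide and a narrow window look numerically.

```python
# Illustrative therapeutic index calculation: TI = TD50 / ED50.
# A large TI means a wide, forgiving window between the effective
# and toxic dose ranges; a TI near 1 means a hazardously narrow one.
# All doses below are hypothetical, for illustration only.

def therapeutic_index(td50, ed50):
    """Return the ratio of the median toxic dose to the median effective dose."""
    return td50 / ed50

wide = therapeutic_index(td50=1000.0, ed50=10.0)  # penicillin-like: wide window
narrow = therapeutic_index(td50=15.0, ed50=10.0)  # Salvarsan-like: narrow window
print(wide, narrow)  # prints 100.0 1.5
```

The contrast maps directly onto the syphilis example that follows: a drug with the second profile forces a constant trade-off between under-dosing the pathogen and poisoning the patient, while one with the first can even have its dose raised to outpace emerging resistance, at least for a time.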
As an example of the great range of drug therapeutic windows which can exist, consider the treatment of syphilis. The pioneer of chemotherapy, Paul Ehrlich, found an arsenical compound (Salvarsan) which became an effective treatment for syphilis through its activity against the bacterial causative agent Treponema pallidum. But its toxicity at therapeutic doses was a major problem encountered in a high percentage of patients, so it was clearly not ideal. When penicillin became available in the 1940s, it was not only highly effective but also associated with very low toxic side effects. Indeed, the rising problem of bacterial resistance was initially countered by simply increasing the dosage of penicillin (or its many derivatives) without problems – but of course this soon becomes ineffectual as resistance increases. (Penicillins can actually induce serious problems through allergies in a minority of people, but this is quite distinct from direct toxicity).
There is also a piece of folk-wisdom along the lines of ‘too much of anything can hurt you’, which is certainly true for some natural nutritional requirements as well as drugs. In a general sense, too much food is clearly bad through the development of obesity, but the ‘dosage’ effects of nutrients can be observed in a much more specific manner. We can look within the set of vitamins once more for useful comparisons, which also demonstrate similar variation in the ‘safety’ windows of dosage as seen within artificial drugs. Vitamin C has exceedingly low (if any) toxicity, for example, and some people have routinely taken very high doses of it for long periods as part of ‘megavitamin’ therapy. On the other hand, the fat-soluble Vitamins A and D are unquestionably highly toxic when taken in excess of recommended daily requirements.
The dosage effect can also be related to the above observation that drugs need not be alien to the biochemistry and physiology of the patient (or animal) undergoing treatment. A pathology caused by a deficiency in a specific molecule can be corrected by artificially supplying that molecule. Similarly, certain pathological states may benefit from the ‘unnatural’ administration of normal bodily proteins, such that circulating amounts of the factor of interest are maintained for therapeutic purposes at higher levels than would normally be the case.
After this short foray into some issues surrounding the meaning of drugs, I’ll conclude with references to ‘nutraceuticals’ once more, by means of a biopoly(verse) note.
By analyses really quite shrewd
On mixtures both complex and crude
Smart chemists have shown
(And now it is known)
Natural drugs exist in some food.
References & Details
‘….. (FDA) defines drugs….’ For FDA definitions of both drugs and cosmetics, see the relevant page of the FDA site.
‘….set of guidelines for evaluating a candidate compound formulated by Lipinski…’ For a discussion of the basis of the Rule-of-five and moving beyond it, see Zhang & Wilkinson 2007.
‘…..field of drug delivery, which now overlaps with advances in nanotechnology.’ For a recent review of this topic in the cancer field, see Chidambaram et al. 2011.
‘….sodium lauryl sulfate….’ Also known as sodium dodecyl sulfate, this detergent also has wide application in laboratories as well as cosmetics.
‘…..a contender for the title of the oldest drug used by humans….’ Alcohol is often cited outright as the oldest drug. I call it ‘a contender’ here since (as noted in an earlier post) the use of botanical medicines is also very old, and can even be linked with primate behavior (see a previous post on zoopharmacognosy). On the other hand, the use of alcohol (or abuse, depending on one’s views) is possible simply from natural cases of fruit fermentation, as also seen with animals. So alcohol sampling by humans need not have required any technology, and is undoubtedly of great antiquity.
‘……ethanol can also be metabolized……’ For an example of the differing influences of food vs. drug effects in an animal system, see Dole et al. 1985.
‘…..classified by the FDA as ‘dietary supplements’….’ For the FDA definitions of dietary supplements, see the relevant page of the FDA site.
‘…..vitamins can clearly cure diseases…..’ Tulp et al. 2006 noted this, adding that vitamins were thus ‘drugs by any definition’. Taken literally, this is clearly incorrect (one can simply exclude vitamins from a drug definition by classing them as dietary supplements, as the FDA does).
‘…..nucleic acid aptamers….’ (Figure footnotes). See a previous post for a brief consideration of RNA aptamers. From large libraries of variants, RNA molecules can be selected to bind desired ligands, and this can be used therapeutically. The first therapeutic aptamer (‘pegaptanib’) was directed against a specific form of vascular endothelial growth factor, for the treatment of ocular vascular diseases. See Ng & Adamis 2006.
‘……an additional broad grouping of substances termed ‘nutraceuticals’……’ For example, see Tulp et al. 2006.
‘……Resveratrol in particular …… has generated enormous interest for its observed anti-aging effects……’ See Pezzuto 2011.
‘……Vitamin C is essential for human health (scurvy resulting in its absence), rats, mice and numerous other species have no problem making their own.’ See Martí et al. 2009 for more information. The production of Vitamin C from glucose requires the enzyme L-gulonolactone oxidase, which humans, primates, and guinea pigs lack. Lachapelle & Drouin 2011 examine when this loss occurred in evolutionary time.
‘…..its exact sequence is not directly encoded in the human germline.’ Antibodies are composed of constant and variable regions, where the variation of the latter accounts for the vast range of different antibody binding specificities which can be induced by immunization. Particular variable region sequences allowing antigen recognition are specific to that immunoglobulin molecule, and are termed an ‘idiotype’. See Searching for Molecular Solutions Chapter 7 for a more detailed discussion of this.
‘…..Numerous other proteins and small molecule hormones can also be noted…..’ Proteins such as interferons were noted in the previous post. Small molecules include adrenalin, thyroid hormones, and natural steroids. In all such cases, though, there is the potential (realized in many cases) for rendering such molecules ‘non-natural’ by various forms of artificial tinkering to improve their performance as drugs.
‘….pioneer of chemotherapy, Paul Ehrlich…’ See Thorburn 1983 for some biographical and other relevant information.
‘……certain pathological states may benefit from the provision of ‘unnatural’ administration of normal bodily proteins…..’ Again, see the reference to the example of interferons in the previous post.
‘…..Vitamin C ….. ‘megavitamin’ therapy.’ The Nobel Prize-winning chemist Linus Pauling was a notable proponent of the efficacy of large Vitamin C (ascorbate) doses for conditions ranging from viral infections to cancer (for example, see Pauling & Moertel 1986). However, no experimental evidence has validated these claims.
‘…..fat-soluble Vitamins A and D are unquestionably highly toxic….’ It is notable that the livers of certain polar animals (including bears and seals) are very rich in Vitamin A, and the eating of such livers by polar explorers has resulted in Vitamin A poisoning (hypervitaminosis A). See Rodahl & Moore 1943.
Next Post: Two Weeks from now.