
Biological ‘Hit and Run’ Processes

September 27, 2012

This post brings up a topic noted previously in the context of ‘subtle’ environmental toxic roots of disease processes. That prior discussion briefly raised the concept of an ongoing pathological process triggered by a transient exposure to a noxious environmental agent. In other words, in such a scenario, long after an initiating poison has done its handiwork, a disease process could continue to wreak havoc in a progressive manner. In the present post we look at the evidence for such ‘hit and run’ processes in a little more detail.

Kicking Off a Pathology

In fact, if we cast a wide net for all instances of phenomena where a transient exposure to an agent results in long-term disease, much evidence can be amassed. This is exemplified by the induction of autoimmunity by toxic agents, which was discussed in another previous post. Two broad pathways for this kind of process were noted: either modification of self-antigens leading to damaging cross-reactive self-responses, or perturbation of normal immune cellular systems by toxic insult. In the first scenario, modification of a self structure by binding of a xenobiotic toxic agent results in the formation of ‘altered self’, which is capable (at least in the right genetic background) of triggering immune recognition. Unless suppressed by internal regulatory mechanisms, such recognition in turn will tend to result in clonal amplification of immune cellular subsets, and the formation of immune binding proteins (T cell receptors and immunoglobulins) which have been somatically tailored to ‘fit’ the offending immunogenic structure. If these recognition systems reacted precisely and solely with the novel structures formed by interaction of a xenobiotic and a normal self-molecule, then perhaps the process would always be self-limiting. (After a transient exposure to the toxic substance, the resulting ‘neoantigens’ would eventually be cleared from the host system, and receptors binding the neoantigens would be ‘out of a job’, but without causing any ‘collateral damage’ to the rest of the host). But evidence suggests that this is often not the case, since the newly mounted immune responses can frequently target not only the original stimulating neoantigen, but also the unmodified source structure, or even another related structure. If the novel somatic immune receptors formed in response to the neoantigen are then continuously maintained (or even further amplified) by the presence of the normal host structure, then a damaging autoimmune pathology may result.

Of course, the same self-structure at the center of such autoimmunity does not normally provoke such responses in isolation, because immune systems are ‘educated’ during an individual’s development to assign it as ‘self’. Immune systems ‘learn’ tolerance towards self-structures through removal of potentially self-reactive clones or through active suppression mechanisms. The process of side-stepping these normal control layers is often referred to as ‘breaking tolerance’, and from this point of view the original toxic agent can be seen as a tolerance-breaker, whose action circumvents normal controls and initiates an ongoing pathological effect.

These points highlight the special features of normal life-saving adaptive immune systems which may be potentially subverted by a transient noxious exposure. But here the aim is to think about hit-and-run phenomena in more general terms, to include cases of apparent self-perpetuating inflammatory processes. In such cases, the immune system may be relevant at different levels, but is not necessarily identifiable as the chief instigator of the continuity of the process itself.

A basic but fairly obvious factor to keep in mind was also raised within the Notes section of a previous post with respect to the marine bioproduct ciguatoxin. This addresses the very issue of ‘transience’ of an environmental exposure: to be literally so, the agent responsible must be removed from the host organism after a reasonably brief period of time. Since some compounds can persist in vivo for extended periods as a consequence of their fat-solubility, resistance to metabolic processing, or both, this is not a trivial question. Still, even the most intractable compounds are gradually eliminated after ingestion, while after-effects of such exposures are in some cases believed to persist for over a decade at least. Such is the case with the neurotoxin MPTP (noted previously in the context of Parkinson’s disease), where ongoing neuropathology has been attributed to toxic insults incurred many years previously. Note, though, that inorganic fibers (such as asbestos) are a major exception to the scenario of gradual elimination, since these can persist for decades and provoke ongoing inflammation. (See a previous post for a little more detail in this area).

Mediators of Self-Perpetuating Inflammation

Poisons are non-replicators, but can damage or interference begun by a specific toxic agent become a ‘replicator’ of a different sort, by the establishment of a self-perpetuating process? For evidence of this, we must look to degenerative processes within the central nervous system (CNS), which may be uniquely susceptible to this kind of pathological problem. This susceptibility stems from two major factors: the vulnerability of the CNS to permanent loss of terminally differentiated non-dividing neurons, and the special status of the CNS with respect to the immune system. It is true that the brain and nervous system are more capable of self-renewal (‘plastic’) than formerly believed, but this plasticity still operates within fairly rigid boundaries. For example, once neurons within the substantia nigra region of the brain are lost in sufficient numbers, Parkinson’s disease results (see a little more detail in a previous post), and neuronal regeneration does not occur. With respect to the immune system, the CNS has long been considered to be ‘privileged’, as a consequence of its effective compartmentalization through the selectivity of the blood-brain barrier in permitting access by peripheral physiological components. More recently, it has become increasingly apparent that ‘privilege’ in this context is more akin to specialization: while immune responses in the CNS may be relatively isolated, they are nonetheless vigorous and significant.

Much of the cellular mass of the mammalian CNS is composed of non-neuronal cells termed glia. Just how much has been surprisingly controversial. A long-standing view held that glia outnumbered neurons in the human brain by tenfold or more, but relatively recent systematic studies suggest that the glial / neuronal ratio is more like 1 : 1, and thereby comparable with other primates. Glia themselves comprise a number of distinguishable cell types with various ascribed functions, broadly classifiable as macroglia (of which a number of subtypes are defined) and microglia. The latter in particular are functionaries of the innate immune system with a common lineage to peripheral monocytes and macrophages, and have been implicated in the mediation of inflammatory processes in the CNS.

The word ‘inflammation’ literally means ‘the act of setting on fire’, or stirring up a metaphorical ant’s nest. Indeed, we can refer to a speech or written article as being ‘inflammatory’ as a consequence of its effect on listeners or readers. In a physiological context, the ‘fire’ connotation relates to the activation of potent cellular and molecular alarm processes which have evolved to counter threats to an organism’s normal functioning. (Of course, long before any understanding of cells and molecules existed, the redness, heat and swelling of an infected area of the body was itself suggestive of ‘flames’). So, inflammation per se is primarily beneficial, and it is an essential aspect of the multi-variate responses mammalian organisms make towards pathogens or damaged self. But inflammation can also result in pathology, and this is very significant with respect to self-perpetuating disease processes.

All processes connected to the innate immune system require recognition of a triggering agent, which can derive either from a pathogen or from the host itself as a consequence of tissue damage (‘pathogen- / damage-associated molecular patterns’; PAMPs and DAMPs respectively). This broad observation is thus applicable to the activation of inflammation. Microglia are activated by certain ‘danger’ signals occurring within the CNS, which is usually appropriate for management of a primary noxious stimulus. This only becomes a problem if the response cannot be turned off when no longer needed, suggesting a failure of regulatory mechanisms. Evidence for a pathological role of microglia in neurodegeneration has come through the use of the above-mentioned MPTP / Parkinson’s model. As noted in a previous post, considerable information exists regarding the direct toxic mechanism for neuronal killing in the substantia nigra after MPTP exposure, but this does not explain how short-term MPTP exposure could result in a sustained inflammatory pathology many years later, long after the initial offending chemical could no longer be present in any biologically significant quantity. Studies with long-term cell culture have reported that while MPTP has the expected initial direct toxicity on dopamine-producing neurons, only in co-culture of neurons and microglia is the effect progressive. Various studies are also suggestive of a role for microglia in human Alzheimer’s disease and other neurodegenerative conditions. For example, fragments of the amyloid-beta protein (which forms abnormal deposits in Alzheimer’s disease) can activate microglia towards a pro-inflammatory state.

The first example of ‘hit and run’ circumstances was noted above in the context of autoimmune misdirection of the adaptive immune system, where cross-reaction with a self-antigen may set up an ongoing self-destructive process. It is then somewhat ironic that the immune system is implicated in another major class of self-perpetuating pathology, albeit at the different level of innate immune responses. But this assigning of blame to a misfiring of innate immunity in neurodegeneration still does not explain how the failure of regulation occurs.

Misfolded Missiles

Pathological inflammation involving microglial perpetrators and neuronal targets has been seen as a self-damaging feedback loop, a vicious circle of sorts. In this model, neuronal destruction abetted by microglia results in the release of various self-structures which maintain microglial activation, leading to succeeding rounds of neuronal destruction and a continuously damaging process. The molecules which potentially mediate such microglial activation may be diverse, ranging from membrane breakdown products to bioproducts which are normally sequestered in the neuronal cytoplasm. Among such self-derived potential activators, one category of particular interest is misfolded or aggregated proteins.
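This vicious circle lends itself to a crude numerical caricature. The toy model below is purely illustrative, with arbitrary made-up coupling constants and a single ‘decay’ term standing in for all regulatory shut-off mechanisms; it simply shows how the same loop either resolves or runs away depending on whether microglial activation is damped:

```python
# Toy feedback model (purely illustrative, not from any cited study):
# activated microglia M destroy neurons N; debris from dying neurons
# re-activates microglia; 'decay' stands in for regulatory shut-off.
def run_loop(decay, steps=200, dt=0.1):
    M, N = 1.0, 100.0                       # initial activation, neuron count
    for _ in range(steps):
        killed = 0.01 * M * N * dt          # microglia-mediated neuronal loss
        N -= killed
        M += 0.5 * killed - decay * M * dt  # debris activates; regulation damps
    return N

print(f"intact regulation: {run_loop(decay=2.0):.1f} neurons remain")
print(f"failed regulation: {run_loop(decay=0.0):.1f} neurons remain")
```

With the damping term present, activation dies out after only marginal neuronal loss; with it removed, the debris-activation loop consumes most of the neuronal population, echoing the failure-of-regulation scenario raised above.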

Aberrant protein folding or the formation of abnormal protein deposits is a hallmark of a range of distinct neurodegenerative conditions. In this regard, the prion diseases are a striking case in point. The nature of prion diseases was discussed in a post of the 5th April 2011 in the context of biological ‘dark matter’, where it was noted that prions represent remarkable instances of ‘transmissible conformations’. A prion is an alternatively folded variant of a normal protein, which can induce the normal form to assume the prion-specific fold. The roles of prions in general biology are becoming increasingly recognized, and there are precedents (especially in yeast) where prions appear to be biologically and evolutionarily useful. But within the human CNS, prions are very bad news indeed, as the causative agents of several uniformly fatal diseases. These include the sporadic condition of Creutzfeldt-Jakob disease, and the notorious transmission of ‘mad cow disease’ to humans through consumption of prion-contaminated meat.

While it is generally (albeit not quite universally) accepted that prion proteins mediate disease through a conformational replication mechanism, this in itself does not explain how the observed pathogenic effects occur. Different prion diseases can have different gross pathological effects. Creutzfeldt-Jakob and ‘mad cow’ diseases are both classed as ‘spongiform’ encephalopathies owing to the sponge-like rendering of diseased neural tissue, but the two are nonetheless histopathologically distinguishable. In certain other prion diseases, the spongiform effect is absent or much reduced, but not the fatal neurological outcome. There is evidence that pathogenic prion accumulation can directly destroy neurons through the induction of specific cell death (apoptosis), but inflammatory processes and aggregation of microglia may also play a role. The full picture of how such protein-mediated diseases progress is undoubtedly complex and far from fully defined.

Nevertheless, the link between prion diseases and other neurodegenerative conditions through suspected inflammatory processes may be reflected at a more fundamental level. Evidence has accumulated to the effect that prion-like effects may be a factor in neurological diseases which are far more common than the relatively rare known prion encephalopathies. Phenomena suggestive of prion mechanisms have thus been reported for Alzheimer’s disease (with respect to the amyloid beta protein) and Parkinson’s disease (in the context of the protein α-synuclein). Indeed, a central role for prions has recently been posed as a unifying hypothesis for all neurological disease, although it remains unproven at this point, at least where prions are defined as fully infectious proteinaceous agents in isolation. If essentially correct, though, this view would provide a mechanism whereby an inflammatory process is maintained continuously. Ongoing prion-based replication and spreading could provide a steady source of ‘damage’ signals that may serve to activate microglia, and thus promote inflammation and a self-perpetuating cycle of host cell destruction. (Cell death then could result from a combination of both direct prion effects and continuously induced deleterious host responses).

The prion link has a certain logical appeal when attempting to explain a truly self-propagating disease process. If an inflammatory response continues indefinitely solely in response to release of ‘danger’ signals from intracellular sites (which are normally shielded from immune cells), it is not obvious how any inflammation site can escape falling into the same trap. Or at least, one might expect that self-perpetuating inflammation would be even more of a problem than it is. One potential answer to this (as noted above) proposes that self-propagation of inflammation results from a failure in homeostatic immune regulatory mechanisms, which might itself be induced by toxic environmental exposures. Yet this is still less than satisfying as an explanation for how a short-term environmental exposure could result in an indefinitely persisting pathology. After removal of a toxic agent, why should cellular regulatory networks not regain their function and exert normal controls once more? For example, after strong suppression of HIV in patients with highly active antiretroviral therapy, the CD4+ helper T cell targets of the virus rebound and immune functioning is restored.

This is not to say that other mechanisms for long-term regulatory perturbations from transient exposures are impossible, but invoking a potential role for prions seems consistent with many current observations. While misfolded proteins were noted above as but one of many potential classes of ‘noxious self molecules’, the subgroup of ‘noxious self’ known as prions have the unique ability to propagate their undesirable misfolded conformation in a manner other structures cannot.

What Induces Prion Processes?

Now we come to a central part of this post, where previous thoughts about ‘subtle’ environmental toxic agents can be linked with the concept of self-propagating hit and run processes. The underlying premise is that prions cause certain fatal neurological diseases (no longer controversial) and possibly contribute to all neurodegenerative conditions (still an unresolved matter). From this stance, we can question how prion misfolds are generated in the first place, and how this can be affected by environmental influences. Firstly, it should be noted that known prion diseases can have a clear-cut genetic basis, owing to the increased propensity of certain mutant forms of the cellular prion protein to adopt different versions of self-transmissible pathological prion forms. But these are very rare, and most human prion disease is classed as ‘sporadic’, or occurring without definitive known causes. The latter conditions themselves are (fortunately!) not at all common, only appearing at a rate of about 3 per million population.

Then we should note that ‘sporadic’ cases are often distinguished from those which are acquired in known ways from an external source. While this is a somewhat arbitrary split, it does provide a useful divide between cases where a pathogenic prion may emerge spontaneously from some endogenous cause, and those where the prion replication cycle is initiated by some external agent. While the focus within this post will thus be on the ‘acquired’ set of prion diseases, it will be useful also to spend a little time thinking about how prion conversion to the pathogenic form may ‘spontaneously’ appear. An important initial matter to note is that the incidence and form of sporadic prion disease is strongly influenced by a specific polymorphism within the normal cellular prion protein gene. In this case, a single nucleotide difference determines whether codon 129 specifies a methionine (M) or a valine (V). Sporadic Creutzfeldt-Jakob disease is manifested in distinct subtypes, which are associated with particular codon 129 genotypes (M/M or V/V homozygotes, or M/V heterozygotes; present at 52%, 12%, and 36% of Caucasian populations respectively).
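As a side note, these genotype percentages can be checked against simple Hardy-Weinberg expectations. The back-of-envelope calculation below is not from any cited study; it merely derives the implied allele frequencies and the genotype distribution that random mating would predict:

```python
# Hardy-Weinberg check on the codon 129 genotype frequencies quoted above
# (M/M 52%, V/V 12%, M/V 36%); an illustrative calculation only.
obs = {"MM": 0.52, "VV": 0.12, "MV": 0.36}

# Allele frequencies: homozygotes carry two copies, heterozygotes one each.
p_M = obs["MM"] + obs["MV"] / 2   # frequency of the methionine allele
p_V = obs["VV"] + obs["MV"] / 2   # frequency of the valine allele

# Expected genotype frequencies under random mating (p^2, q^2, 2pq).
exp = {"MM": p_M**2, "VV": p_V**2, "MV": 2 * p_M * p_V}

for g in obs:
    print(f"{g}: observed {obs[g]:.2f}, expected {exp[g]:.2f}")
```

The implied allele frequencies are 0.70 (M) and 0.30 (V); the quoted heterozygote figure (36%) sits somewhat below the ~42% random-mating expectation, though nothing in the present discussion hinges on that difference.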

By the nature of the prion conversion process, a variety of ways can be proposed by which the pathogenic conformational switching could be sporadically triggered. In all such cases, it must be kept in mind that only a short-lived presence of a triggering event, whether associated with a variant-sequence prion form or not, would in principle be required. This could occur in a single replicating CNS cell through a somatic mutation in the normal prion gene which predisposes the corresponding expressed protein towards the aberrant prion conformation. It could also take place through errors at the transcriptional or translational level, or even through translational pausing (a temporary slow-down in the rate at which protein is translated from an mRNA template), which could affect the folding of the resulting protein product. Any of these effects would likely produce only a very small amount of altered prion protein, but a small quantity might be enough to set the deadly cascade in motion. Once progression is enabled, conformational conversion could proceed with altered normal protein, superseding any requirement for further input from the original triggering prion forms.

But yet another interesting possibility comes at the level of protein chaperones which assist the folding of normal cellular prion proteins. If such assistance temporarily ‘goes bad’ for whatever reason, then it is easy to envisage that a pathogenic prion fold might emerge, which could then begin propagating itself by hijacking normally-folded forms into the same disease-causing structure. This theoretical possibility suggests that the stakes are considerably higher for prion folding than for any other protein. While an ordinary protein that eludes chaperone help and becomes misfolded will be simply degraded and recycled, a misfolded prion might have potentially lethal consequences for the entire organism. Some workers have considered the possibility that prion misfolds per se may be quite frequent, but usually kept under control by immediate chaperone binding and re-folding. But regardless of whether de novo prion misfolding is very rare or commonplace, chaperones can be seen in this case as a primary surveillance mechanism, safeguarding against the formation of an ‘enemy within’. In this respect, it is hard to avoid making an analogy between this molecular ‘chaperone surveillance’ and immune surveillance mediated by the adaptive immune system towards the formation of cancer cells. In both cases, a surveillance mechanism serves to forestall the development of potentially ‘killer replicators’. Although at very different levels and in very different ways, both cancer cells and prions blindly replicate themselves at the expense of their hosts, often with fatal results when the natural surveillance mechanisms break down or are overcome.

In any case, the putative role of prion chaperones then leads into the area of ‘acquired’ prion disease, since the possibility has been raised (though apparently little investigated) that xenobiotics might at least in some circumstances have a role in subverting prion chaperone assistance, and thereby indirectly promote prion disease. What else can be said about ‘toxic’ acquisition of the pathogenic prion phenotype?

Toxic Induction of Pathogenic Prions?

Firstly, it is necessary to note that ‘acquired’ prion disease almost always refers to a pathogenic origin through the ingestion of infectious prions present within environmental sources. This has usually been associated with eating prion-contaminated materials, ranging from human brain during cannibalistic rituals of certain New Guinea tribes (with the acquisition of the neurodegenerative ‘Kuru’ form of Creutzfeldt-Jakob disease), to eating meat contaminated with bovine prions (leading to the above-mentioned ‘mad cow’ disease, or ‘new variant’ Creutzfeldt-Jakob disease). Yet it has also occurred through contamination of medical treatments, such as corneal grafts or naturally-derived human growth hormone prior to the availability of recombinant sources free of possible prion contamination.

If we define a ‘toxin’ or ‘poisonous agent’ very loosely, then the above cases would be classifiable as environmental toxic prion-triggering agents, but this of course is not quite what is in mind. Can a non-protein compound produce the same effects? Here we must keep in mind that this could also have implications for general neurodegenerative diseases with ‘prion-like’ aspects, as well as for fully certified prion pathologies. As noted above, a xenobiotic might promote pathogenic prion misfolding via chaperone interference, but in principle this xenobiotic action could also underlie any mutational, transcriptional, or translational mechanism (as also mentioned above). Any such indirect action would be highly unlikely to have any specificity towards the prion gene or protein, but could in principle allow transient production of the undesirable prion protein conformation. Such theoretical events then are highly stochastic (chance-based) in nature, resulting as a product of the chance of (say) a mutagen affecting a potentially vulnerable CNS site multiplied by the chance of it then producing a somatic mutation in the prion gene, in turn multiplied by the chance of the mutant gene being expressed as misfolded prion protein, which then goes on to enforce its conformational and deleterious replication. Similar statements can, of course, be made regarding the chances of acquiring somatic mutations which lead to cancer. (See a previous post regarding the dual aspects of tumorigenesis vs. neurodegeneration). The end result is that mutagen-induced prion pathology must be exceedingly rare, but may occur within a large enough population of ageing individuals. (The longer an individual lifetime, the greater the chances that such untoward events may happen).
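The multiplicative chain of chances described here is easy to make concrete. All of the probabilities below are hypothetical placeholders (the text supplies no numbers); the point is purely the arithmetic of multiplying small independent probabilities and then allowing many opportunities for the compound event:

```python
# Illustrative only: hypothetical per-step probabilities for the chain of
# chance events described above. None of these numbers comes from the text.
p_hit      = 1e-4   # mutagen reaches a potentially vulnerable CNS site
p_mutation = 1e-5   # ...and produces a somatic mutation in the prion gene
p_misfold  = 1e-3   # ...and the mutant protein adopts the pathogenic fold

# Joint probability of the whole chain, per opportunity (e.g. per cell).
p_event = p_hit * p_mutation * p_misfold

# A minuscule per-event probability still accumulates over many independent
# opportunities (cells times years times individuals).
opportunities = 1e9
p_at_least_once = 1 - (1 - p_event) ** opportunities
print(p_event, p_at_least_once)
```

Even with a joint probability of one in a trillion per opportunity, a billion opportunities yield a roughly one-in-a-thousand chance of at least one initiating event, consistent with ‘exceedingly rare, but possible in a large enough population of ageing individuals’.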

But can a non-proteinaceous xenobiotic directly target prion proteins for misfolding, just as an environmental pathogenic prion ‘seed’ directly acts on endogenous normally-folded cellular prion protein? The notion of ‘seeding’ in this context has been viewed as a kind of templating or nucleation process, where non-prion templates have been referred to as ‘heteronucleants’ or ‘molecular casting’. In an interesting study over a decade ago, it was reported that infectivity of a hamster-adapted pathogenic prion could survive such high temperatures (600 °C) that at least a subcomponent of it may correspond to some form of inorganic templating agent. (All proteins would be destroyed under such conditions). Under some circumstances, DNA strands can induce the formation of amyloids (extended alternative fibrillar conformations of certain proteins, with similarities to pathogenic prion protein aggregates). In either case, it might be presumed that an extended template structure is required to effect the necessary conformational conversion, which in simplistic terms involves converting protein α-helical structures to extended β-sheets. This kind of requirement might suggest that no low-molecular-weight compound of modest dimensions could directly elicit prion conformational conversion. But the full picture is not so simple.

It is known that mutations which predispose towards the formation of amyloid structures tend to destabilize a target protein’s normal conformation. In an analogous manner, specific binding of a low-molecular-weight compound by a susceptible protein might promote prion or amyloid conformational conversion through destabilization of the normally soluble conformation. Such an activity could be contrasted with the known facility of certain ‘chemical chaperones’ for stabilizing or promoting the formation of the correct conformation of otherwise unstable mutant proteins. Whether putative chemical ‘anti-chaperones’ capable of promoting pathogenic prions or amyloids are a real threat is not clear, but an increased understanding of this theoretical molecular pathology would be useful. For example, epidemiological studies of Creutzfeldt-Jakob disease have found case clusters at greater rates than can be attributed to chance alone, suggesting ‘a common environmental factor’. The figure below is provided as a way of summarizing several prion conversion scenarios, both known and hypothetical:

Legend to Figure. Depiction of different pathways leading towards conversion of the normal cellular prion protein fold into the transmissible pathogenic form. The normal folded structure of the human prion protein is shown at the top, alongside the amyloid (replicable) form of a yeast prion (HET-s). For both, α-helical regions of proteins are red, β-strands green, loops light blue. Prion conversion typically involves a shift in α-helical conformation into β-sheet, which is often associated with the formation of aggregates or fibrils. (Source of structures: Protein Data Bank; 1QLX normal human prion; 2RNM yeast HET-s. Images generated with Protein Workshop).

A: Effects of a familial mutant prion protein. Here the mutation (light blue oval) acts in destabilizing the normal form, which then is pre-disposed towards adopting the transmissible conformation, capable of converting either mutant form or wild-type protein (relevant for wild-type / mutant heterozygotes). B: Sporadic conversion. Here a variety of endogenous rare events (as noted above) can potentially convert a small amount of prion, which can then act to ‘seed’ ongoing production of the pathogenic form. C: Acquired conversion. Here foreign prions ingested from environmental sources (such as contaminated foods or medical procedures) can establish the transmissible cascade. The hypothetical potential of other (non-prion) templating molecules with an extended structure is also depicted. D: Hypothetical action of a specific small molecule in binding and destabilizing normal prion protein, leading to the same transmissible chain of events. In this case, the hypothetical compound chemically recapitulates the same effects of a deleterious mutation as in (A).


Understandably, a great deal of research has focused on finding low-molecular-weight compounds which can reverse pathological protein misfolding, rather than how to promote it. But such considerations must be reserved for a later post. To finish up, here is a (biopoly)verse musing on a biological ‘hit and run’ instigator:

 A prion conformational switch

Needs a trigger, they tell me, but which?

It seems human prion conversion

Is conformational perversion

And a sadly transmissible glitch

References & Details

(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).

‘…..the induction of autoimmunity by toxic agents…..’    See a previous post for examples and references.

‘….after-effects of such exposures are in some cases believed to persist for over a decade. Such is the case with the neurotoxin MPTP…..’     See Langston et al. 1999.

‘…..the brain and nervous system is more capable of self-renewal (‘plastic’) than formerly believed….’     This was popularized by Norman Doidge in the book The Brain That Changes Itself (2007, Viking Press).

‘….that [immune] ‘privilege’ in this context is more akin to specialization….’     See Gao & Hong 2008.

‘…..long-standing view held that glia outnumbered neurons in the human brain by tenfold or more, but relatively recent systematic studies suggest that the glial / neuronal ratio is more like 1 : 1.‘     See Azevedo et al. 2009.

‘……the activation of inflammation…..’      See Strowig et al. 2012, especially with respect to the ‘inflammasome’, a protein complex associated with inflammatory responses.

‘…..MPTP exposure could result in a sustained inflammatory pathology many years later…..’      This also refers to the work of Langston et al. 1999. See also McGeer et al. 2003, for similar effects observed in monkeys.

‘……while MPTP has the expected initial direct toxicity on dopamine-producing neurons, only in co-culture of neurons and microglia was the effect progressive….’      See a review by Gao & Hong 2008.

‘……a role for microglia in human Alzheimer’s disease and other neurodegenerative conditions.‘     See Blasko et al. 2004; Lull & Block 2010.

‘…..fragments of  the amyloid-beta protein (which forms deposits in Alzheimer’s disease) can activate microglia towards a pro-inflammatory state….’     See Blasko et al. 2004; Frank-Cannon et al. 2009; Lull & Block 2010.

‘……there are precedents (especially in yeast) where prions appear to be biologically and evolutionarily useful.‘     See recent work from the lab of Susan Lindquist regarding the biological utility of prions in wild yeasts: Halfmann et al. 2012.

‘…..evidence that pathogenic prion accumulation can directly destroy neurons through the induction of specific cell death (apoptosis), but inflammatory processes and aggregation of microglia may also play a role….’     See Cronier et al. 2004; Kovacs & Budka 2008.

‘…..Evidence has accumulated to the effect that prion-like effects may be a factor in neurological diseases which are far more common than the relatively rare known prion encephalopathies.‘     See Barnham et al. 2006; Moreno-Gonzalez & Soto 2011.

‘…..Phenomena suggestive of prion mechanisms have thus been reported for Alzheimer’s disease (with respect to the amyloid beta protein) and Parkinson’s disease (in the context of the protein α-synuclein).’      For example, see Nussbaum et al. 2012 and Angot et al. 2012 respectively.

‘…..a central role for prions has recently been posed as a unifying hypothesis for neurological disease, although it remains unproven at this point…..’      This was put forth by Stanley Prusiner (Prusiner 2012), who won a Nobel prize for his prion discoveries. But see Lahiri 2012 for a cautionary note.

‘……..a failure in homeostatic immune regulatory mechanisms, which might itself be induced by toxic environmental exposures…’     See a paper of Vancheri et al. 2000. Note that while this study invokes induced failure of regulatory networks, it is most relevant to pathologies with persistent noxious stimuli producing fibrotic responses.

‘……[prion diseases] only appearing at a rate of about 3 per million population….’ See Safar 2012.

‘……Sporadic Creutzfeldt-Jakob disease is manifested in distinct subtypes…..’      See Gambetti et al. 2003.

‘…….a variety of ways can be proposed by which the pathogenic conformational switching could be sporadically triggered….’     See Moreno-Gonzalez & Soto 2011; Safar 2012; Watts et al. 2006.

‘……the possibility has been raised ……. that xenobiotics might at least in some circumstances have a role in subverting prion chaperone assistance….’     See Reiss et al. 2000.

‘….prion misfolds per se may be quite frequent, but usually kept under control by immediate chaperone binding….’     See Kovacs & Budka 2008.

‘…..immune surveillance mediated by the adaptive immune system towards the formation of cancer cells.‘    See Dunn et al. 2004 (no relation) for a discussion of immunosurveillance and related areas.

‘…..the degenerative ‘Kuru’ form of Creutzfeldt-Jakob disease…..’    See Gajdusek 2008.

‘…..non-prion templates have been referred to as ‘heteronucleants’..’     See Gajdusek 2008.

‘……it was reported that a hamster-adapted pathogenic prion could survive such high temperatures (600˚ C) that some form of inorganic templating agent must exist…’     See Brown et al. 2000.

‘…..DNA strands can induce the formation of amyloids….’     See Giraldo 2007; Fernández-Tresguerres et al. 2010.

‘……mutations which predispose towards the formation of amyloid structures tend to destabilize a target protein’s normal conformation…..’     See Hammarström et al. 2002.

‘……the known facility of certain ‘chemical chaperones’ for stabilizing or promoting the formation of the correct conformation…..’      For a review of ‘chemical chaperones’, see Leandro & Gomes 2008. Note also that chemical chaperoning was referred to in passing in the post of 29th March of this year, with respect to the rescue of a human defective alcohol-processing enzyme by such means (this was described in a paper by Perez-Miller et al. 2010).

‘……epidemiological studies of Creutzfeldt-Jakob disease have found case clusters at greater rates than can be attributed to chance, attributed to a common environmental factor….’      See Linsell et al. 2004.

‘…..a great deal of research has focused on finding low molecular compounds which can reverse pathological protein misfolding….’     For a review of this very large field, see De Lorenzi et al. 2004.

Next post: Early December.

Another Level of Xenoprotection

July 30, 2012

The recent series of posts has featured different levels of recognition of environmental poisons and related defense processes, ranging from taste receptors to drug export mechanisms. But one layer has not been addressed so far, and this is the present theme.

 Evicting Poisons

This oversight was drawn to my attention by no less than a domestic cat, not normally known for paying much attention to blogs of any description. In my presence, this little animal suddenly vomited up half a cigarette, an improbable addition to what most observers would consider a healthy feline diet. Forgive my bringing this up, so to speak, but it did serve as the springboard for further thoughts. It’s not clear what the consequences of ingesting cigarettes would be for cats, but presumably it would not have much nutritional benefit. But whatever prompted eating this suspect item in the first place (which we’ll consider a little further towards the end of this post), the cat’s stomach (or at least some part of its digestive apparatus) strongly and emphatically pressed a metaphorical reject button, and saved this pet from a perhaps unpleasant nicotine-related encounter. (Rest assured, she was none the worse for the experience, and didn’t have to clean up the mess).

In previous posts (28th March and 30th May of 2012), the bitter taste receptors were considered as a frontline defense against ingestion of environmental poisons. Anything getting past this guard level can then potentially be neutralized through a variety of xenorecognition and xenoprocessing mechanisms (also considered in a previous post). Indeed, but in between lies another level of defense, as any cigarette-eating cat could show you. Bad things that get past the oral cavity into the stomach can potentially be prevented from proceeding to do harm to the whole organism, if they can be physically ejected as soon as possible. Regurgitation can at least greatly reduce a toxic load, potentially bringing down the exposure to levels manageable by other xenoprocessing mechanisms, and thereby having life-saving (and in turn, evolutionary fitness) implications. This area might seem trivial, but further thought shows that it certainly is not. Regurgitation is a complex and coordinated series of muscular actions, which clearly must have some kind of trigger to initiate it. What external agents then stimulate this response, how are they recognized, and how is the resulting reflex produced?

Emetogens and Their Receptors

 A particular focus of attention in the field of emesis has arisen from empirical observations made over decades of cancer chemotherapy and its continuous refinement. Put simply, in the absence of simultaneous anti-emetic treatments, some anticancer drugs are highly emetogenic (inducing nausea and vomiting), but there are marked differences in their relative potencies in this regard.  For example, the well-known drug cisplatin, a tremendous advance in the treatment of certain tumors, is nonetheless notorious for its emetogenic effects. On the other hand, drugs such as vincristine and bleomycin rarely induce this highly unpleasant side-effect, although they certainly must be administered with great care owing to their toxicities. (Conventional anticancer drug cytotoxicity typically has low selectivity towards tumors, and thus any dividing host cell may potentially be affected as ‘collateral damage’).

Much clinical research has understandably focused on ways of minimizing the distressing induction of emesis by the necessary anticancer regimens. Effective anti-emetic drugs target relevant neural receptors (such as the 5-hydroxytryptamine (serotonin) type 3 receptor, 5-HT3) involved in transmission of the emetogenic signals. As one would expect for a complex behavior pattern, emesis is ultimately controlled by the brain. In terms of the complexity of emetic effects, it should be noted that in addition to specific substances, vomiting can be induced by pregnancy or physical stimuli (as with motion sickness), or can arise from psychogenic origins (consider any distressing influence which is literally sickening). At one time, a specific neural center was postulated to act as an emetic controller, but more recent evidence suggests that cooperating regions of the medulla oblongata (in the hindbrain) are involved. Input signaling implicates a region of the medulla called the Area Postrema, which, very significantly, is not restricted by the blood-brain barrier, and is thereby able to sample blood-borne xenobiotics. In addition, other evidence suggests that emetogenic primary signaling originates from intestinal sites. Gut vs. blood-borne sensing might be viewed as two separate levels of emetogenic detection, since orally ingested poisons will normally encounter the gut receptors first. Nevertheless, in both cases the chemosensing and neural transduction of signals have common results.

Yet this information does not directly address the nature of the chemoreception which transduces toxin-induced emetic signaling in the first place, and it is apparent that there is still much to be learned in this area. It would seem reasonable to postulate a role for bitter taste receptors in this signaling process, based on the assumption that specific chemoreceptors are involved. This follows from relatively recent observations showing that the TAS2R bitter receptors are expressed not only in taste buds, but at a number of distinct anatomical sites, including the gut and the brain. (This was also alluded to in the previous post). More indirectly, the redeployment of a primary xenoreceptor set in a second-round protection mechanism would from first principles appear to be a parsimonious evolutionary pathway.

Still, no evidence appears to support this proposal at present. But if TAS2R receptors were involved, it might be predicted that at least a broad correlation would exist between the perceived bitterness and emetogenicity of a compound. (In other words, this would propose that the more bitter a compound, the more strongly it would tend to induce emetic effects). But this proposition can immediately be challenged on several grounds. Firstly, emesis can be induced by sufficient concentrations of simple salts (such as lithium chloride or copper salts), which do not engage bitter taste reception. And secondly, no evidence suggests any significant correlation between the degree of bitterness and the emetogenicity of a compound, although systematic information in this regard seems to be lacking. One problem here is the measurement of emetogenic potential itself, and its variation between species. (Obviously, human experimentation in this area can have many ethical constraints). But the absence of discernible linkage between bitterness and emetic potency is conveyed by the bitterest known compound, ‘denatonium’, an artificial derivative of the anesthetic lidocaine. Despite this compound’s intense bitterness, it has low toxicity relative to many natural bitter substances (noted further below). While denatonium salts are likely to induce emesis if the dose is high enough, this question does not appear to have been systematically studied. But at the least, if the emetogenic signal paralleled bitter perception, denatonium would also be the most potent known emetogen, and there is certainly no evidence for this. As another relevant observation, the low emetogenicity of the anticancer drug vincristine (noted above) is striking given its nature as a bitter-tasting plant alkaloid. Therefore, bitterness per se and emesis cannot be closely associated.

Nevertheless, these observations do not rule out a role for bitter taste receptors in emesis, since many complicating factors might cause divergence between the perceptual signaling of bitterness, and signaling from the same receptors in different physiological sites. For example, both the range of specific TAS2R receptors and their signaling transduction mechanisms might differ between oral and gastric or brain receptors, such that a strong bitter signal does not necessarily produce an analogously strong emetic response. Additional taste receptors beyond the TAS2R set might also be involved, as a possible explanation for emesis induced by salts (also noted above).  Thus, as in a great many areas of biology, only a positive read-out here is very useful. (In other words, if a very strong correlation between perceptual bitterness and emetogenicity did exist, it would certainly be consistent with the use of TAS2R receptors in both contexts – but even this, of course, would require more direct information before being proven).
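To make the idea of a ‘positive read-out’ concrete: given paired measurements of bitterness and emetic potency across a panel of compounds, a rank correlation would be a natural first test, since both scales are noisy and non-linear. The sketch below is purely illustrative — the compound values at the bottom are invented placeholders, not literature data:

```python
# Hypothetical sketch: testing whether perceptual bitterness tracks emetic
# potency across compounds. The sample values below are invented
# placeholders, not measured data.

def ranks(values):
    """Return the rank of each value (1 = smallest), averaging ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over a tie group
        avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: bitterness index (quinine = 1) vs. a notional
# emetic-potency score for five unnamed compounds.
bitterness = [1.0, 1000.0, 10.0, 5.0, 0.5]
emetic_potency = [3.0, 0.5, 2.0, 4.0, 1.0]
print(spearman(bitterness, emetic_potency))  # near zero or negative here
```

A coefficient near +1 across many compounds would be the ‘positive read-out’ consistent with shared TAS2R involvement; the scattered placeholder values above, like the real observations discussed, give no such signal.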

Emetic Signaling

In a general model of signaling which leads to emesis, cells receptive to chemical or other stimuli secrete neurotransmitters upon activation, which in turn activate adjacent neural signaling cells, with resulting common higher-level sensation and behavioral outcomes (nausea and vomiting). By such means, similar effects can be elicited by diverse signals, ranging from a variety of chemicals (from inorganic salts to complex organic compounds), to disagreeable motion stimuli and psychogenic causes. This arrangement has a certain logic to it, since it is unnecessary for the final results (emesis) to qualitatively differ as a consequence of different origins. In this sense, the emetic signaling may be considered convergent from different receptors and different neurotransmitters towards a common neural response. This can be contrasted with the sense of taste, which has both divergent and convergent aspects. With respect to the latter, a wide range of different compounds activate TAS2R bitter receptors, and different sets of compounds (albeit probably less diverse) also converge on activation of sweet receptors. But since the biological functions of bitter and sweet sensing are radically distinct, it would make no sense for their sensory outputs to converge, and this is obvious from experience. (It is also consistent with recent studies showing divergent brain regions activated by the respective types of taste stimuli).

Since antagonists of relevant neuroreceptors (signal blockers) are effective anti-emetics, it might be expected that corresponding agonists (signal activators) should be strong emetic agents. Such agents would then directly potentiate the signaling neural cells, rather than acting indirectly via chemoreception (for example) and specific neurotransmitter release. While not false, such reasoning is nonetheless simplistic, since specific neurotransmitters typically bind not just one but a family of receptors, each of which can transduce distinct signaling outcomes. The activity of an agent is then greatly dependent on its specificity for a particular receptor subtype, and the nature of its interaction. Yet there are certainly precedents. As noted above, many anti-emetic drugs target the 5-hydroxytryptamine type 3 receptor, and an agonist of this same receptor, phenylbiguanide, is (among other pharmacological properties) a strong emetogen. Neurotransmission triggered by the peptide mediator cholecystokinin is also involved in emesis, and a particular cholecystokinin variant (CCK-8) is a highly potent emetic in humans, far more so even than the most active cancer cytotoxic drugs.

In this brief overview, the possible role of taste receptors in emesis has been considered, but olfactory receptors might also be implicated in humans. In this case, associated serotonin release again provides a mechanistic convergence with above-noted emetic signaling processes. Certainly some chemicals can invoke a nauseous response simply from exposure to their volatile odors (pyridine is one example that comes to mind, from personal experience).

Non-emetic Mammals and Behavior-driven Xenoprotection

While considering the role of emesis as another level of xenoprotection, one must account for circumstances where it is absent. This is well-demonstrated by rats and mice, whose physiology does not permit the emetic reflex. It has been suggested that these rodents side-step the need for vomiting to some extent through highly sensitive food sampling behavior, and conditioned avoidance of foods which have undesirable effects. Failing this, such animals have been shown to ingest inorganic materials (especially clays), which act as adsorptive detoxifying agents, a behavior termed pica. The interesting parallel between pica and emesis is shown by experiments where rat pica is induced by emetogens and mitigated by anti-emetic drugs. Given these observations, both learned food avoidance and pica emerge as xenoprotective strategies, where higher-level behavior patterns are crucial elements. Conditioned food avoidance in rats has been associated with chemosensing in the Area Postrema, noted above as an important signaling center in emetic animals. Pica has certain conceptual overlap with ‘zoopharmacognosy’ (considered in detail in a previous post), where animals ‘self-medicate’ by consuming environmental bioproducts (principally plant materials) for health-related reasons. Such innate behavior patterns have clear survival value, and would be positively selected on that basis.

Given the proposed increased reliance of rats on primary taste sensing for detecting (and subsequently avoiding) noxious substances, it is of interest to note apparent strong divergences between rat and human bitter taste perception. In particular, the above-mentioned denatonium, exquisitely potent as a bitterant as measured by human sensing, is markedly less so in rats. This is clearly evident through a practical use of denatonium salts as safety additives to rat poisons, in order to help prevent accidental consumption by humans (especially children, whose aversion towards bitterants tends to be stronger than that of adults). Obviously, this strategy would fail if rats were as sensitive towards the intense bitterness of denatonium as we humans are. From a rat’s point of view, this might seem unfortunate, but in reality, at least in this specific instance the rat bitter taste responses are much more in tune with the actual toxicity of denatonium. (The human perception of denatonium is far out of proportion to its toxicity, as noted a little further below). It would be interesting to see if the bitter taste perceptual repertoire of rats in general has a better correspondence with actual chemical toxicity than that shown by human responses. This too would be in line with more intense selection pressures on rat bitterant tasting than for primates during the evolutionary past.

In humans, bitterness vs. toxicity can be addressed by comparing thresholds of bitter taste with toxic responses for a wide range of compounds. Assessing the outer limits of bitterness can usually be done (with highly dilute solutions of test compounds), but lethal dosages can only be obtained through accidental poisonings, which obviously are both undesirable and poorly controlled. The situation is almost the opposite with rats, where controlled toxicity testing is a standard laboratory practice, but rats generally have trouble reporting when they can first perceive bitterness in a dilution series of a compound. In lieu of this, the minimal chemical concentrations creating aversion can be tested, but this is not the same thing as assaying the lowest concentrations perceivable. More sophisticated testing is possible with in vitro assays for triggering of human vs. rat taste receptors, but this is at the level of primary signaling rather than perceptual awareness. An example of some assembled literature data is shown below; although incomplete for the rat, it partially illustrates the disconnect between human perception and toxic response for denatonium.

Top graph: Human bitterness indices for denatonium, strychnine, and brucine, equivalent to bitterness thresholds for each, normalized to that for quinine (i.e., where quinine bitterness index =1), compared to available information on approximate lethal adult dosages. (Note log scale on X-axis). These are compared with rat laboratory toxicity indices (LD50 values also normalized to that for quinine). The table below shows the original figures for calculating the indices. Note apparent differential susceptibilities for brucine vs. strychnine for humans and rats.

Sources: General: NCBI toxnet; Taste Perception in Humans, from Neuroscience. 2nd edition. Purves D, Augustine GJ, Fitzpatrick D, et al., editors. Sunderland (MA): Sinauer Associates; 2001; The Alkaloids: Chemistry and Physiology, Volume 43. Geoffrey A. Cordell, Richard Helmuth Fred Manske Eds; Academic Press 1993. Also Hansen et al. 1993 (denatonium benzoate). Where appropriate, values shown here have been taken as the midpoints of measured experimental ranges.
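The quinine-normalized indices described in the caption above amount to simple ratio arithmetic. A minimal sketch of that normalization follows, using invented placeholder thresholds rather than the actual figures from the table:

```python
# Minimal sketch of the normalization used for the indices above: each
# compound's value is expressed relative to quinine (quinine = 1).
# Since a *lower* taste threshold or LD50 means a *more* potent compound,
# the index is the quinine value divided by the compound's value.
# The numbers below are placeholders, not the figures from the table.

def index_relative_to_quinine(values, potency_is_inverse=True):
    """Normalize a dict of per-compound values so that quinine = 1.

    If potency_is_inverse is True, the inputs are thresholds or LD50s
    (smaller = more potent), so the index is quinine_value / value;
    otherwise a direct ratio value / quinine_value is used.
    """
    q = values["quinine"]
    if potency_is_inverse:
        return {name: q / v for name, v in values.items()}
    return {name: v / q for name, v in values.items()}

# Placeholder taste thresholds (arbitrary units; smaller = more bitter)
thresholds = {"quinine": 8.0, "denatonium": 0.25, "strychnine": 2.0}
print(index_relative_to_quinine(thresholds))
# quinine maps to 1.0; denatonium's index here is 8.0 / 0.25 = 32.0
```

The same function applied to LD50 data yields the toxicity indices, so bitterness and toxicity end up on directly comparable quinine-relative scales, which is what makes the denatonium disconnect visible.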


In any case, with the inclusion of both conditioned aversion and the pica ‘toxic sequestration’ strategies, we can now define a broader picture of xenoprotection, as depicted below:

Schematic depiction of different levels of xenodefenses. A: Avoidance of noxious materials via aversive taste responses, which includes conditioned avoidance as observed with rats; B: Ejection of poisons via emesis, whether emetic sensing occurs within the gut or via sensing of blood-borne compounds; C: Sequestration of ingested poisons by ingestion of clays or related materials (pica); D: Internal xeno-defenses, as considered previously. For detail, see relevant previously-posted diagram, from the post of 28th March, 2012.


The paradox of bitterness

So far we’ve seen that, in humans at least, bitterness correlates poorly with the potency of chemical emetogenicity. But if we consider the perception of bitterness in its entirety, it becomes clear that it is an imperfect correlate with aversion itself, which is its accepted direct evolutionary rationale. It has thus been noted that complete avoidance of absolutely all bitter substances would have negative nutritional consequences. But if certain environmental compounds are potentially useful, why should these register as bitter in the first place? After all, bitterness is a perception resulting from triggering of specific receptors, not an inherent property of a molecule, so for what reason should a useful molecule be thrown into the same ‘bitter’ grab-bag as a motley collection of poisons?

One issue in at least a subset of cases could be the existence of similarities in molecular shape between potentially useful compounds and wholly deleterious poisons, such that they are recognized by the same range of TAS2R bitter receptors. While evolution of receptors capable of discriminating even subtle molecular differences is possible in principle, such changes may be constrained in practice by lack of effective selective pressures. But in any case, a better evolutionary result (as dictated by fitness benefits) might simply be more nuanced perception related to the strength of the bitterness signal. A low-level bitter taste (especially when other tastants are also present) might overlap with a pleasure response in some circumstances. So a weakly bitter (but possibly useful) nutrient might then be consciously ingested, but the background bitterness would serve to limit overdosing. Certainly in human adults, a certain amount of bitterness in food or drink is often prized. Among many possible examples, the alkaloid quinine (long employed as a treatment for malaria, as noted in a previous post) is still used as a bitterant in certain drinks, including bitter lemon or tonic water. Given that the preference for this kind of additive is not everyone’s ‘cup of tea’, the variation therein probably arises from a combination of both genetic differences in taste receptor repertoires and positive conditioning towards acceptance (development of an ‘acquired taste’). But there are levels of bitterness beyond which no normal human will voluntarily go. It was for that reason that reference to bitterness as an aversive factor in previous posts often included the qualifier ‘intensely’, to distinguish such uniformly negative perceptions from lower-grade bitterness which in some people provides a pleasurable stimulus.

Finally, the ‘bitterness paradox’ prompts a loop-back to the cat and the cigarette which initiated this post. Despite the bitterness and potential aversive power of tobacco, it remains a possibility that it was consumed from an instinctive drive towards ingesting potentially anti-parasitic compounds. If so, it might be a case of innate feline zoopharmacognosy. Indeed, there is evidence that leaves of the tobacco plant have certain antiparasitic properties, and cats regularly consume grasses if given the opportunity, which might in part be related to innate ‘self-medication’. Even so, the negatives of cigarette-eating probably outweigh any potential benefit, and such behavior could then be considered a misfiring of an instinctive programming mechanism.

Anyway, to conclude with a biopoly(verse) offering on the poison sequestration theme:

 Rats can never show emetic display

So what control keeps rat poisons at bay?

Through a sudden ‘Eureka!’

Comes the answer: It’s pica!

They thus sequester their toxins with clay.

This one hinges on what is apparently a non-standard pronunciation of pica as ‘peeker’. Although some sources do give this as a possible alternative, more usually it is rendered as sounding like ‘piker’. While this is not an Earth-shattering issue for most purposes (‘you say tom-may-to, I say tom-mah-to…’) it does tend to ruin a little verse if one’s pronunciation expectations are violated. So, to accommodate the alternative:

When poisoned, a rat may eat clay

(Emesis is never the way)

Perhaps this is like a

Sick human with pica

In keeping bad toxins at bay.

References & Details

(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).

‘……this little animal suddenly vomited up half a cigarette…..’    This cat had access (now curtailed) during daylight hours to a frontyard and sidewalk where (regrettably) passers-by sometimes leave cigarette butts, and apparently inadvertently drop whole cigarettes on occasion.

‘…..the well-known drug cisplatin… nonetheless notorious for its emetogenic effects…..’    This is graphically described in Siddhartha Mukherjee’s prize-winning cancer book The Emperor of All Maladies (Fourth Estate, 2011), in which it was noted that nursing staff in oncology units nicknamed cisplatin ‘cis-flatten’.

‘…..drugs such as vincristine and bleomycin are very low in inducing this highly unpleasant side-effect [emesis].’    For a review including the classification of cancer cytotoxic drugs by their emetogenic potential, see Hesketh 2008.

‘….target relevant neural receptors ….’     See again Hesketh 2008, and also Navari 2009.

‘…..more recent evidence suggests cooperating regions of the medulla…..’     See Hornby 2001.

‘…..the Area Postrema…..’     This has also been referred to as the ‘chemoreceptor trigger zone’. See Miller & Leslie 1994; Shinpo et al. 2012.

‘……emetogenic primary signaling originates from intestinal sites.’    See again Hesketh 2008; and Andrews & Horn 2006.

‘……TAS2R bitter receptors are not only expressed in taste buds, but at a number of distinct anatomical sites, including the gut and the brain.’    For a recent general perspective on non-perceptual roles of taste receptors, see Trivedi 2012. For a specific view of TAS2Rs in gut sites, see Rozengurt & Sternini 2007; for brain, see Singh et al. 2011.

‘……a parsimonious evolutionary pathway.’    The notion of biological modularity is encompassed within an interesting paper of Weiss 2005.

‘…..emesis can be induced by sufficient concentrations of simple salts…..’    These include lithium chloride and copper sulfate; see Percie du Sert et al. 2012.

‘……measurement of emetogenic potential itself, and its variation between species.’    An extensive review of the literature on emetic induction with a variety of agents across a range of species was conducted and analyzed by Percie du Sert et al. 2012. Apart from measurement inconsistencies between species, animal assays for emesis can be distressing, so alternatives are being developed. See Robery et al. 2011 for work in this regard with the non-sentient social ameba Dictyostelium.

‘…..denatonium….’     This name comes from its use in rendering alcohol undrinkable, or ‘denatured’. It has widespread application as an aversant added to moderately toxic materials to discourage consumption, especially by children. As a quaternary substituted nitrogen compound, it is usually produced as benzoate or saccharide salts. See Hansen et al. 1993.

‘…..recent studies showing divergent brain regions activated by the respective types of taste stimuli…..’     See Chen et al. 2011.

‘…..phenylbiguanide …… an emetogen….’    See Miller et al. 1994.

‘……cholecystokinin variant (CCK-8) is a highly potent emetic in humans….’    Cholecystokinin occurs as a 33-mer peptide, but also as shorter truncated forms which retain activity, including the octamer CCK-8. For detail on the emetic properties of CCK-8 in comparison with other agents, see Percie du Sert et al. 2012.

‘……olfactory receptors might also be implicated in humans…..’    See Braun et al. 2007.

‘… rats and mice, whose physiology does not permit the emetic reflex….’    For an excellent (and fully referenced) account of this and many related areas (such as pica), see Anne Hanson’s rat behavior site, which also includes a list of known emetic behavior in a wide range of vertebrates.

‘……Conditioned food avoidance in rats…..’     This rat behavior has alternatively been referred to as ‘delayed learning’; also discussed in a previous post concerned with zoopharmacognosy.

‘….a behavior termed pica.’    The extent of pica in rats has been shown to correlate with the degree of emetogenicity of anticancer drugs in humans (Yamamoto et al. 2007).  De Jonghe et al. 2009 have also provided evidence that consumption of kaolin (a type of clay) by rats can assist recovery from doses of the anticancer cytotoxic drug cisplatin. Pica has been documented also in emetic animals, and certainly humans are included in this regard. While human consumption of clays or related materials is mostly an abnormal behavior, in certain circumstances it has been proposed to have positive effects associated with correction of micronutrient deficiencies. The increased incidence of pica in pregnant women has been long noted, and this is possibly associated with benefits from protection against toxins. (See Young 2010). It is interesting to compare this with apparent zoopharmacognosy in pregnant lemurs through the consumption of tannin-rich plant materials (noted in a previous post).

‘……Conditioned food avoidance in rats has been associated with chemosensing in the Area Postrema……’     See Ossenkopp & Eckel 1995; Eckel & Ossenkopp 1996.

‘…..denatonium, exquisitely potent as a bitterant as measured by human sensing, is markedly less so in rats….’     Some results seem to indicate that denatonium salts may be no more bitter to rats than is quinine. See Kaukeinen & Buckle 1992.

‘…..complete avoidance of absolutely all bitter substances would have negative nutritional consequences.’    See commentary of Calloway 2012.

‘……genetic differences in taste receptor repertoires……’     For more on genetic differences in human taste perception, see the previous post. Evidence for positive selection during human evolution of certain bitter taste receptor alleles has been demonstrated; see Soranzo et al. 2005; Li et al. 2011.

‘…..there is evidence that leaves of the tobacco plant have certain antiparasitic properties.’    See Iqbal et al. 2006.

Next Post: September.

Xenobiotic Recognition Repertoires

May 30, 2012

This post is essentially the fourth part in continuation of the series ‘Subtle Environmental Poisons and Disease’, but in particular, it extends from the previous post dealing with xenorecognition, or the ability of organisms to recognize and contend with toxic chemicals ingested from the environment. Here we’ll focus on the range of xenobiotics which can be recognized by any of the different systems considered in the last post, which amounts to the biological recognition repertoire towards such chemicals. Is it complete, or can some chemical agents ‘fly under the radar’ and escape detection?

Recognizing xenobiotics

Failure of an organism’s defenses to recognize an incoming foreign compound would imply that its recognition range (or repertoire) was incomplete, such that its ability to ‘see’ certain molecules had one or more ‘holes’. While this is a logical proposition, it should be recalled that there are different levels of xenorecognition, including taste receptors, internal xenosensors, xenoprocessing enzymes, and xeno-exporters (considered in the previous post; see the relevant Figure). So, given that each level uses a different set of receptors, failure of recognition at one level has no necessary bearing on the potential recognition at other levels. The caveat ‘potential’ is used because in any linked functional chain, a breakdown at one stage will compromise later stages. (If an activation series A → B → C → D is absolutely dependent on the sequential input of each member, then obviously a ‘knock-out’ of A, B, or C will prevent the activation of D regardless of its intact state. D would then fail to be triggered unless alternative pathways for its activation existed). Thus, failure to activate a xenosensor may prevent effective upregulation of expression of the appropriate xenoprocessing enzymes (see the relevant Figure from the previous post), even if the latter are well-equipped to deal with the toxic threat. A hole in a repertoire in an ‘upstream’ defense level might therefore cause ineffective responses to a xenobiotic, even if the ‘downstream’ recognition repertoires are perfectly adequate.
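The linear dependency argument above can be caricatured in a few lines of code — the stage names and ‘knock-out’ mechanics are purely illustrative, not a biochemical model:

```python
# Toy model of the linear activation chain A -> B -> C -> D discussed
# above: each stage fires only if all upstream stages fired. This is an
# illustrative sketch only, not a model of real xenosensing pathways.

def run_chain(stages, knocked_out=()):
    """Return the set of stages that activate, given knocked-out stages."""
    active = set()
    upstream_ok = True
    for stage in stages:
        if not upstream_ok or stage in knocked_out:
            upstream_ok = False  # every downstream stage now fails too
        else:
            active.add(stage)
    return active

chain = ["A", "B", "C", "D"]
print(run_chain(chain))                     # all four stages activate
print(run_chain(chain, knocked_out={"B"}))  # only A: C and D never trigger
```

Note that knocking out D alone leaves A, B, and C active — a ‘downstream’ hole does not disturb upstream recognition, whereas an ‘upstream’ hole silences everything below it, which is exactly the asymmetry argued in the text.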

On the other hand, some lines of defense might seem decoupled from others. At the frontline of molecular sensing, bitter taste receptors essentially warn ‘don’t eat this!’. Yet if a dangerous substance is eaten anyway, through either misadventure or failure to receive a bitter signal, then surely the next lines of defense would be independent of the breakdown in the first strategy of avoidance. True enough, given the apparently independent nature of taste perception relative to other xenosensing mechanisms, but an interesting wrinkle on this has emerged from observations that the T2R taste receptors (which transmit bitter signals) are also expressed in specific gut cells and airway smooth muscle cells. Obviously this does not involve direct sensory transmission, since we don’t experience taste signals through our intestines, despite many people often having a ‘gut feeling’ about all sorts of important matters. So what do these gut taste receptors do? Although much more work is required, recent results have suggested that they may have a role in limiting the gut-mediated absorption of potentially toxic molecules (defined as ‘bitter’ through their interaction with these receptors). If this is correct, taste receptors may have more than one role in limiting the intake of potentially noxious compounds.

In the context of poisons, it is possible to think of recognition in an inverted sense, since obviously any toxic substance must itself ‘recognize’ at least one type of physiological target in order to exert any kind of toxic effect in the first place. This viewpoint strains the meaning of molecular recognition beyond its usual sense, since at face value it would have to be inclusive of simple chemical reactivity between (for example) a toxic aldehyde group and many different proteins and other biomolecules. Yet it might be useful in passing simply as a backdrop for posing a hypothetical situation where a toxic substance ‘recognizes’ certain target molecules of an organism, but the organism’s defenses are completely blind to it, at all levels of xenorecognition. And taking this further still, what of molecules that do no harm at all, while likewise escaping recognition? Such ‘invisibility’ will be looked at a little more below.

Holes for the Individual, Holes for the Species

A second important issue with respect to holes in any biological receptor repertoire concerns individual variation versus the general repertoire for the species as a whole. Let’s look at this question once again with the first level of defense against xenobiotics, the taste receptors.

For over 80 years, it has been known that genetic differences in humans determine the taste response to certain defined simple chemical substances. For example, a substantial fraction of humans cannot experience the intensely bitter taste of the compound phenylthiourea (also known as phenylthiocarbamide, or PTC) reported by the remainder. Over the last two decades, much has been learned about taste receptors, and the specific T2R receptor responsible for signaling PTC bitterness has been identified. Seven different alleles of this receptor are known, including the non-taster and major taster forms (these two being the only alleles occurring with substantial frequency outside sub-Saharan Africa). Interestingly, genetic evidence suggests that the non-taster allele has an ancient provenance, and this persistence has led to the proposal that it may have a selective benefit preserving it within the gene pool. This could have occurred if the non-taster receptor allele lost recognition for PTC but actually gained the ability to recognize and signal bitterness for some other (as yet unknown) naturally occurring compound. If both the taster and non-taster PTC alleles then provided fitness benefits under certain circumstances, both alleles would be preserved by ‘balancing’ natural selection.
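How balancing selection can hold two alleles in a population indefinitely is easy to show with the standard one-locus, two-allele selection recursion from population genetics. The sketch below uses heterozygote advantage as the balancing mechanism and entirely invented fitness values; it is a minimal illustration of the principle, not a model of the actual PTC alleles:

```python
def next_freq(p, w_TT=0.95, w_Tt=1.00, w_tt=0.95):
    """One generation of selection on the frequency p of a 'taster' allele T.

    Standard deterministic recursion: with heterozygotes fittest, neither
    allele is driven out; p converges to a stable intermediate equilibrium
    rather than to 0 or 1. Fitness values are invented for illustration.
    """
    q = 1.0 - p
    w_bar = p*p*w_TT + 2*p*q*w_Tt + q*q*w_tt   # mean population fitness
    return p * (p*w_TT + q*w_Tt) / w_bar       # allele frequency after selection

p = 0.9  # start with the taster allele nearly fixed
for _ in range(1000):
    p = next_freq(p)

print(round(p, 3))  # 0.5 -- both alleles persist under 'balancing' selection
```

With these symmetric fitnesses the stable equilibrium sits at p = 0.5; any asymmetry in the (hypothetical) fitness values simply shifts the equilibrium, without eliminating either allele.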

Under such circumstances, the collective genotype of a species will be a mosaic of alternative alleles for sensing xenobiotics by taste. But in general, loss of sensory receptors can be a fitness gain if the sensory input no longer exists, or is no longer in any way beneficial, for the species. The classic example in this regard is the loss of sight (and eventually loss of complete eyes) in cave animals which live out their entire life-cycles in darkness. An interesting case in point with respect to chemical sensing is the loss of functional sweet taste receptors in domestic cats, which as obligate carnivores evidently have no need at all to experience sweetness or be attracted to sweet substances. Recently, this observation has been extended to a range of other ‘complete’ carnivores.  It is a well-understood evolutionary principle that unnecessary genetic function will tend to be lost, since individuals lacking such gene expression will gain a slight fitness advantage. This may well be at work in the evolution of ‘unsweet’ (though definitely not unsavory) carnivores, but it is possible that other factors which positively select for sweet taste loss also operate in these circumstances.

Yet where a single receptor has a degree of promiscuous ligand recognition, as with the bitter taste receptors, total ablation is likely always to incur some fitness loss. (In a changing environment, some dangerous compounds recognized by such a receptor may no longer be encountered, but other compounds within the receptor’s individual recognition range may still be present). A functional mutation in a receptor (rather than complete inactivation), however, might merely alter its specificity range, and could involve both losses and gains, as noted for the PTC story.

So in principle any xenosensory receptor could, through inactivating mutation, give rise to a specific repertoire reduction in an individual. This will constitute a fitness loss, and will tend to be eliminated from naturally breeding populations even if the reduction in fitness is quantitatively very small. Selection in favor of loss (as with sweet taste in carnivores) is unlikely ever to occur with xenosensory receptors in general (including bitter taste receptors), owing to their recognition promiscuity, but selection maintaining variation in individual receptor repertoires (as with PTC perception) is probably present.

It should not be surprising that here we exclude sweet taste reception from xenosensing, since after all, the main targets of sweet perception are simple sugars (in food sources) which are certainly not foreign to any living biosystems. Yet the sweet taste receptor can definitely be triggered by completely non-natural compounds (saccharin, aspartame, and many others) and some intensely sweet natural proteins. This might be framed as ‘xenorecognition’ of a sort, but that is not the primary issue. It is the neurological end-point, the sensory perception following the initial taste receptor triggering, which distinguishes a useful taste-mediated xenoreceptor. Sweet substances (naturally, in primate diets, mainly sugars in fruits) trigger a pleasurable response (‘good – eat me!’), while intensely bitter substances produce an aversive reaction (‘bad – don’t eat!’). In fact, if a natural toxic substance elicited a sweet response, an animal might be stimulated to consume more of it, to its great detriment. And that of course would be completely contrary to everything that an effective xeno-response system should provide.
Clearly, natural selection would rapidly change sweet taste receptors which acted in this way towards compounds in an animal’s normal environment, but no such selective pressures exist for substances which are never likely to be naturally encountered. An example of such an ‘unnatural’ toxic but sweet substance is ethylene glycol, widely used as an antifreeze. Poisonings of dogs and young children have been attributed to its sweetness, although hard evidence for this seems to be lacking. It is indisputable, though, that ethylene glycol is very toxic (through its metabolic products) and elicits a sweet taste. At very least, the perception of ethylene glycol sweetness would presumably not deter an animal with functioning sweet taste receptors from imbibing it, in the same way that a strongly bitter substance would.

While ‘holes’ in the xenobiotic recognition repertoire of a species as a whole could in principle occur at any level of xenosensing and processing (as noted above; see a Figure from the previous post), deficits in taste warning signals are relatively easy to define. So let’s consider an example of a general deficit of this kind towards an interesting group of highly toxic compounds.

Xeno-myopia to xeno-blindness?

Certain tropical marine fish can be a source of a potent group of toxic compounds which upon consumption cause a condition known as ciguatera. The toxic principle involved, ciguatoxin, is a complex polyether chemically related to a number of other known marine poisons, including brevetoxin, palytoxin, and maitotoxin. (The latter is of interest as the largest natural bioproduct known, with a molecular weight of 3425 Daltons). Ciguatoxin itself exists as several chemical variants based on a common polyether skeleton, of molecular weights around 1000–1100 Daltons. Polyether toxins are accumulated in fish through the food chain, with the original source identified as certain species of the marine eukaryotic single-celled protists known as dinoflagellates. (Although the ultimate machinery for synthesis of these large and complex molecules may derive from symbiotic bacteria associated with specific dinoflagellate species).

Structure of a representative ciguatoxin, ciguatoxin-1. Letters A-M correspond to the nomenclature convention for each cyclic ether ring.


Unlike a great variety of plant-derived toxic alkaloids and other noxious molecular agents, ciguatoxin is tasteless, and thus fails to bind and activate any of the bitter taste receptors. But of course, failure to trigger the first line of defense has no bearing on what a molecule may do once ingested. The very high toxicity of ciguatoxin obviously demonstrates that it must very significantly interact with at least one physiological target. (In fact, it is neurotoxic, perturbing the activity of voltage-gated sodium and potassium channels which regulate nerve electrochemical transmission). While bypassing the frontline of taste, how is ciguatoxin ‘seen’ by the remainder of the xenosensory system? The metabolism of this compound (and related molecules) appears slow in experimental animals, with much ciguatoxin excreted in an unmodified state. Symptoms of ciguatera toxicity in humans can persist for months or even years following exposure, consistent with slow metabolic turn-over. On the other hand, evidence has been produced indicating that exposure of mice to ciguatoxin is associated with transcriptional activation of Phase I and II xenobiotic responses (phases of the latter responses were considered in the previous post).

In combination, these data would suggest that while ciguatoxin (and by extension other polyether marine toxins) can trigger xenobiotic sensors after its ingestion, its processing and removal from the body is not highly efficient. Certainly its lipid solubility may delay its removal, but that alone would not account for a very low level of metabolic processing. Given the focus of this post on xenorecognition repertoires, what is the limiting case of poor recognition of a toxic agent? In other words, if failure to taste ciguatoxin and its ensuing poor metabolism is ‘xeno-myopia’, is there any precedent for ‘xeno-blindness’, where a toxic agent creates havoc without any recognition or metabolic processing? Or would this be virtually a contradiction in terms? Given that xenorecognition operates by means of a specific set of receptors of limited number (albeit with considerable promiscuity), and a vast number of potential targets for a toxin exist in vivo, it might not seem an impossible prospect. Yet there seems to be no precedent for this. It is likely that certain compounds are indeed poor substrates for all metabolic processing enzymes (and thus slowly metabolized), but ‘poor’ is not at all the same as ‘invisible’. It may be the case that virtually all small molecules offer a weak binding fit for the promiscuous pockets of at least some xenosensors and processing enzymes, allowing a slow level of metabolic turnover. Alternatively, ‘non-specific’ attack by reactive oxygen species might be a factor, as noted again below.
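The ‘numbers game’ behind that last point can be made concrete with a back-of-envelope calculation. The receptor count and promiscuity figure below are hypothetical, chosen only to show the shape of the argument, not measured values for any real xenosensing system:

```python
def escape_probability(n_receptors, promiscuity):
    """Chance that a given ligand evades every receptor in a panel.

    If each of R receptors independently recognizes a random fraction f of
    all candidate ligands, the probability a ligand escapes them all is
    (1 - f)**R: coverage 'holes' shrink geometrically with both receptor
    number and promiscuity.
    """
    return (1.0 - promiscuity) ** n_receptors

# Hypothetical panel of 25 receptors, each recognizing 10% of ligand space:
print(f"{escape_probability(25, 0.10):.3f}")   # 0.072 -> ~7% of ligands unseen
# Doubling the (hypothetical) promiscuity to 20% shrinks escapes sharply:
print(f"{escape_probability(25, 0.20):.4f}")   # 0.0038
```

Even under these made-up assumptions, the qualitative conclusion matches the argument in the text: a modest panel of promiscuous receptors leaves very few true ‘holes’, which is consistent with weak recognition (‘myopia’) being far more common than complete invisibility (‘blindness’).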

In a xenobiotic context, the biological rationale for promiscuous recognition in the first place is to ensure that a limited number of receptors can cater for recognition of a much larger range of potential targets. But as with any biological issue, this question must also be considered from the perspective of evolutionary selective pressures. Evolutionarily speaking, the human species would have had little if any exposure to ciguatoxin until relatively recent times, and even now, its impact is restricted to specific geographical areas. A maritime fish-eating species in tropical areas which was regularly threatened by ciguatera poisoning would be under a strong selective pressure to evolve a better xenorecognition system towards polyether toxins, including primary aversive taste sensitivity. Alternatively, evolution of means for very efficiently detoxifying or internally sequestering polyether toxins would allow otherwise contaminated marine foods to still serve as useful nutrient sources. (It is possible that some tropical fish have the latter kind of protection, since they can accumulate high levels of ciguatoxin without apparent ill-effects). Sometimes a small change in the amino acid sequence of a poison’s target molecule can make a very large difference to the agent’s toxicity. For example, consider the action of the insecticide DDT, which (in common with many of the polyether marine toxins) targets the neural voltage-gated sodium channel. It appears that only three key amino acid residue differences in the human vs. insect sodium channel determine the differential toxicity of DDT to insects. Selective pressures from environmental toxins could thus drive sequence changes in targets such as this voltage-gated channel, such that function is preserved but susceptibility to the toxin is diminished.

Xenosensing vs. adaptive immunity

While thinking about evolutionary selective pressures, it’s interesting to compare recognition of xenobiotics with the adaptive immune system. The latter, of course, exists to deal with a gamut of pathogens which otherwise would take over a host and replicate freely at the host’s expense. Internal surveillance against transformed cells (‘altered self’) to prevent tumor formation is another role for this advanced recognition system.

It is easy to conceive of ‘adaptive xenosensing’, where a novel (and poorly recognized) toxic environmental compound induces selective processes from populations of variant receptors on xenosensory cells, such that variants with greater affinity are selected and amplified. The power of this Darwinian process in action has been shown by the successful artificial generation of antibodies to ciguatoxin itself. This would not occur under natural circumstances, since it requires artificial conjugation of fragments of ciguatoxin to large protein carrier molecules, such that the toxin fragments act as immunological haptens. Nevertheless, this demonstrates that the adaptive immune system can indeed select for antibodies with the correct binding specificity against a toxic polyether molecule.

Why then does this not occur with xenosensing, to overcome poor initial responses to novel xenobiotics? (Here we return to this question as initially noted in the previous post). Once again, we must look to evolutionary explanations. Evidently the existing xenorecognition systems of vertebrates are selectively ‘good enough’ despite theoretical room for improvement, where such improvement would require extensive investment in new developmental pathways with their consequent energetic demands. Above all, even the most poorly-metabolized compounds do not replicate, and (provided they are present in sublethal amounts) are gradually removed from organisms. Pathogenic and invasive organisms, on the other hand, will indeed replicate, and present an acute problem demanding adaptive solutions. And this is what evolution has bequeathed us: a xenorecognition system which is static within the lifetime of an individual, but variable through selective pressures over evolutionary time; and an immune system which is dynamically adaptive in time-frames much shorter than an individual life-span.

Bioorthogonality and Xenobiotics

We have considered ‘xeno-blindness’ as a hypothetical situation where a toxic compound elicited no response from an organism which had ingested it. (Such a molecule would ‘recognize’ one or more target molecules anywhere within the bounds of the host’s biosystem (and thereby manifest toxicity), but the foreign compound would fail to be recognized by any of the host’s xenodefenses, at any level). What if non-recognition is taken a step further still, such that the xenobiotic is neither toxic nor recognized? In such circumstances, we would be reminded of the notion of orthogonality, as raised in a previous post with respect to ‘weird life’. Our hypothetical compound which is completely ‘invisible’ (neither toxic nor xeno-recognized) would thus be considered bioorthogonal. Toxicity, of course, is the reason many compounds come to the attention of science in the first place. If the polyether metabolites of dinoflagellates were completely non-toxic, they would likely have escaped detection, given the low absolute amounts present in most marine samples. (Of course, they would still not be chemically ‘invisible’, and would eventually be picked up by modern sensitive metabolomic profiling – but this would be much delayed relative to the ‘flagging’ of their presence through their toxic actions).

A first thing to note in this regard is that bioorthogonality can be a relative concept. Consider that a compound could be ‘invisible’ in a specific cell type in culture, yet actively metabolized by cytochrome P450 enzymes expressed in liver cells in the whole organism from which the cultured cells were derived. In such circumstances, bioorthogonality might be assigned in the first case, but certainly not the latter. Yet even if bioorthogonality (or something approaching it) exists for an entire mammalian organism, this need not apply to the biosphere as a whole. Bacteria, after all, are the consummate masters of biochemical transformations, and can modify an astonishing range of compounds. Included among these are natural polyether toxins themselves, and a great many non-natural artificial compounds. A good case study of the latter phenomenon is the targeting of paraoxon (a toxic metabolite of the organophosphorus insecticide parathion) by the enzyme bacterial phosphotriesterase. This activity is believed to have evolved only within the last few decades, when paraoxon has become present in the environment, since no natural substrate for this enzyme is known.

It is thus not difficult to see that bioorthogonality can exist in discrete compartments (as in the case of a single cell type in culture noted above), but it is much more problematic to accept that any novel molecule would evade recognition within the entire biosphere. Such a hypothetical molecule could even be seen as a kind of orthogonal ‘dark matter’, but its existence would be very dubious for similar reasons to the possible existence of truly ‘orthogonal life’ on this planet intersecting with conventional life (as noted in a previous post). Certainly new artificial molecules released into the environment (such as DDT and other organochlorine compounds) persist for long periods, but again this is slow processing rather than total non-recognition, given that organisms capable of metabolizing such products are not evenly environmentally distributed. And, as exemplified by the above paraoxon example, bacteria can evolve efficient enzymatic recognition and processing extremely quickly, so any period of supposed ‘orthogonality’ would likely be short in any case.

It might be thought that any molecular entity even approaching the notion of bioorthogonality should exhibit chemical stability and low reactivity. At one level there would seem to be some value in such a proposition, given the environmental and chemical stability of compounds such as fluorinated hydrocarbons (especially polymers thereof). But at another level, this cannot be correct. Certain heavily fluorinated compounds (including the simple molecule carbon tetrafluoride, CF4, but more commonly derivatives of methyl ethyl ether) have the property of acting as general anesthetics. And even the ultimate in non-reactivity, the inert gases, can induce such anesthesia. The inert (or ‘noble’) gas xenon has often been cited as a near-ideal anesthetic, with only its considerable expense limiting its much more widespread use. (It is a little ironic that the name ‘xenon’ has the same etymological root, meaning ‘stranger’, as seen in all the ‘xeno-’ words in this post). Xenon can in fact form a limited number of chemical compounds with highly reactive partners under specific circumstances, but there is no question of it forming any covalent bonds under physiological conditions.

Although there are vast numbers of artificial and naturally-derived drugs which bind non-covalently to their specific targets (and thereby act as inhibitors or other functional modulators), all of these are subject to some level of recognition by other proteins within the xenosensing system, followed by xenoprocessing involving covalent modification. This, of course, is the underlying basis of all drug metabolic studies. As we have seen, some xenobiotics are metabolized at a very slow rate. In this post, complex polyethers are the key exemplars, but dioxin (TCDD) is another important case in point, as discussed in the previous post. In neither case, however, can slowness of metabolism be in any way equated with complete invisibility to xenoprocessing mechanisms. Thus, while the mode of action of drugs may very often be via non-covalent interactions, drug processing (the xenorecognition system) involves at least a low level of covalent modification.

As noted above, it could be argued in principle that very slow metabolic attack on highly resistant xenobiotics might proceed through the action of reactive oxygen species, whether deriving from cytochrome P450 activity (or other processing enzymes) or more non-specifically. If the latter, the authenticity of the ‘xenorecognition’ might be called into question, if bona fide ligand-receptor interactions (even at a high level of promiscuity) were not involved. Even if this should be the case, the reactive oxygen species nevertheless derive from host metabolism, and so even very slow attack on xenobiotics from this source would still constitute a failure of true bioorthogonality.

But normal xenoprocessing (or any non-specific oxidation) cannot be relevant in any way to xenon, since xenon will never undergo any covalent reactions in vivo. And yet xenon surely is far from bioorthogonal, given its dramatic ability to modulate conscious experience in vertebrate organisms. These observations indicate that bioorthogonality on the part of any xenobiotic agent cannot be defined simply by a complete lack of covalent reactivity at all biosystem levels. (Note we cannot refer to ‘compounds’ or ‘molecules’ when including monatomic inert gases such as xenon). So while hypothetical bioorthogonality would necessarily involve a lack of reactivity, this would have to mean a lack of functional reactivity of any kind, whether covalent or non-covalent, and at any physiological level.

There’s an important area relevant to bioorthogonality already alluded to in a previous post, which concerns the artificial development of chemical reactants and reaction processes which are themselves orthogonal to the biological systems in which they take place. But to do justice to it, that will have to wait until a later post.

So, to conclude with one of the subthemes used here:

One should note that ‘xeno’ means stranger

And possibly, terrible danger

A harsh bitter taste

Is no form of waste

It serves as a guardian ranger

References & Details

(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).

‘…..the observation that certain taste receptors….. are also expressed in specific gut cells…’   See a review by Rozengurt & Sternini 2007.   ‘……recent results have suggested that they may have a role in limiting the gut-mediated absorption of potentially toxic molecules….’    See Jeon et al. 2011.  /   ‘…..or airway smooth muscle cells….’     See Deshpande et al. 2010.

‘For over 80 years, it has been known that genetic differences in humans determine the taste response….’    The phenomenon of ‘taste-blindness’ to phenylthiourea (phenylthiocarbamide) was first reported in 1931; see a review by Drayna 2005.

‘…..the specific T2R receptor responsible for signaling PTC bitterness has been identified…’    For details of this receptor (TAS2R38), see Bufe et al. 2005.

‘……this persistence [of the non-taster PTC allele] has led to the proposal that it may have a selective benefit preserving it within the gene pool. ‘ See Kim & Drayna 2005.

‘……the loss of functional sweet taste receptors in domestic cats……’   /   ‘ Recently, this observation has been extended to a range of other ‘complete’ carnivores.  It is a well-understood evolutionary principle that unnecessary genetic function will tend to be lost..….it is possible that other factors which positively select for sweet taste loss also operate in these circumstances.’     See Jiang et al. 2012 for details on carnivore loss of sweet taste. In general, an often-noted example of abrogation of unnecessary gene function is loss of the ability to synthesize vitamin C (ascorbate) by primates, owing to their fruit diets containing plentiful supplies of the vitamin.

‘…..the sweet taste receptor can definitely be triggered by …….some intensely sweet natural proteins.’   Proteins triggering the sweet taste receptor bind to a different site from that used by low-molecular-weight saccharide sweet substances.   See De Simone et al. 2006.

‘  Poisonings of dogs and young children have been attributed to its [ethylene glycol’s] sweetness, although hard evidence for this seems to be lacking…’    See studies in dogs by Marshall & Doty 1990; Doty et al. 2006. Whether or not at least some dogs are prompted to consume ethylene glycol through its taste, non-sweet tasting cats and other obligate carnivores would presumably be completely resistant to this effect. (Note that dogs, like bears, are not in fact ‘complete’ carnivores, and can subsist on other foods).

‘…..the original source identified as the marine eukaryotic single-celled protists known as dinoflagellates….’    For some basic background on dinoflagellates, and especially their unusual genomics, see Lin 2011; Wisecaver & Hackett 2011.

‘….ciguatoxin is tasteless….’ See Park 1994; Lehane 2000. ‘Tastelessness’ here refers to the highest concentrations of polyether marine toxins found in contaminated fish, which are clearly sufficient to intoxicate a human or other mammal. Thus, even if artificially massive concentrations of ciguatoxin (far in excess of those encountered in contaminated natural sources) stimulated a taste receptor signal, such a response would clearly be far too insensitive to be useful as a primary anti-toxic avoidance screen. So tastelessness here is a functional definition, even if not necessarily absolute.

Another intriguing observation in this respect is that a commonly-reported symptom of ciguatera intoxication is distortion of taste perception (dysgeusia), such as experiencing a metallic taste in the mouth. Recent evidence suggests that this arises from ciguatoxin (and related polyethers) interfering with voltage-gated ion channels in taste receptor cells. These channels are associated with neurotransduction of taste receptor signals, but must be distinguished from the taste receptors themselves (which are members of the very large G Protein-Coupled Receptor family). See Ghiaroni et al. 2005; Ghiaroni et al. 2006. It thus seems ironic that polyether marine toxins fail to effectively activate taste receptors in the first place, yet perturb their function once intoxication has occurred.

‘…..[ciguatoxin] is neurotoxic, perturbing the activity of voltage-gated sodium and potassium channels….’  See a review by Wang & Wang 2003; and a more recent study by Pérez et al. 2011.

‘…..ciguatera toxicity in humans can persist ……consistent with slow metabolic turn-over…’ See Lehane 2000; Chan & Kwok 2001; Bottein et al. 2011. Note that (without further information) this is by no means proof of actual persistence of the original toxic molecule, given the formal possibility of ‘hit-and-run’ ongoing pathological effects, as noted for the neurotoxic chemical MPTP in a previous post.

‘….exposure of mice to ciguatoxin is associated with transcriptional activation of Phase I and II xenobiotic responses….’ See Morey et al. 2008.

‘A maritime fish-eating species in tropical areas which was regularly threatened by ciguatera poisoning would be under a strong selective pressure to evolve a better xenorecognition system….’    This specifically refers to land-dwelling or semi-aquatic animals rather than those which are fully marine. ‘Red tides’ of dinoflagellate blooms are often associated with massive fish kills, but in such cases the cause appears to be release of toxins directly into local marine environments. Where this applies, improved xenorecognition could not promote avoidance. Even if protective mechanisms have evolved in an animal towards a toxin, massive transient exposures may still have lethal consequences.

‘….. possible that some tropical fish have the latter kind of protection [detoxifying or internally sequestering polyether toxins]….’   In this regard, it is interesting to note that a natural inhibitor of the toxic effects of at least one polyether marine product (brevetoxin) has been isolated, albeit in this case from dinoflagellates themselves. (Production of the inhibitor as well as the toxin in varying proportions by dinoflagellates may contribute to the variable magnitudes of fish kills during ‘red tides’). See Bourdelais et al. 2004.

‘….only three key amino acid residue differences in the human vs. insect sodium channel determine the differential toxicity of DDT….’    See O’Reilly et al. 2006.

‘…..successful artificial generation of antibodies to ciguatoxin….’    See Tsumoto et al. 2008; Ui et al. 2008.

‘ Why then does this [development of adaptive recognition systems] not occur with xenosensing, to overcome poor initial responses to novel xenobiotics….’    A similar scenario was raised in Searching for Molecular Solutions (Ch. 2, Molecular Sensing / Multirecognition) with respect to chemical sensing of odorants.

‘ Pathogenic and invasive organisms, on the other hand, will indeed replicate, and present an acute problem demanding adaptive solutions. ‘    A seeming paradox in this regard is the lack of adaptive immune systems in invertebrates, which are certainly just as prone to microbial assaults. One answer may lie in their possession of highly diverse innate immune receptors, and this is a topic for a later post.

Bacteria, after all, are the consummate masters of biochemical transformations …..Included among these are polyether toxins……’ See Shetty et al. 2010.

‘…..phosphotriesterase. This activity is believed to have evolved only within the last few decades….’    See Raushel & Holden 2000; Aharoni et al. 2005.

‘…..certain heavily fluorinated compounds …… have the property of acting as anesthetics….’    See Smith et al. 1981; Koblin et al. 1999.

‘…..the inert gases, can induce such anesthesia…..’    Xenon, krypton, and argon have anesthetic properties, but xenon is the most useful in having such effects under normal conditions of pressure. See Kennedy et al. 1992. Although the mechanism of inert gas anesthesia is uncertain (as are mechanisms of anesthesia in general), xenon has long been known to be capable of binding to hydrophobic pockets in proteins (See Prangé et al. 1998), which might be associated in some way with its anesthetic activity.

‘Xenon can in fact form a limited number of chemical compounds with highly reactive partners under specific circumstances….’    The first xenon compound (xenon hexafluoroplatinate; also the first compound of any of the noble gases) was prepared by Neil Bartlett in 1962. For a review of this and progress in inert gas chemistry in general, see R. B. Gerber’s very useful article from the Israeli Chemical Society site.

Next post: July.

Xenorecognition and its Influences

March 28, 2012

This post is the third in a series (Subtle Environmental Poisons and Disease) dealing with environmental toxic influences, particularly those with long-term ‘subtle’ action. The major subtheme here is the role of individual variation in determining the outcome of a toxic challenge, with particular emphasis on how (in some cases) an organism’s anti-toxic protective mechanism may actually be a source of problems. An implicit requirement underlying both of these topics is the existence of specialized systems for recognizing potentially dangerous non-self molecules from the environment. These themes accordingly center around xenorecognition, or the ability to recognize foreign chemical intrusion at the molecular level. Framing the matter in this manner may bring to mind the immune system, and indeed an analogy can be made between responses to chemical intrusion and innate immune systems tuned by evolution for signaling responses to the presence of dangerous pathogenic organisms. Although such parallels should not be overstated, both systems serve to maintain homeostasis for complex multi-cellular organisms.

 Contending with a Sea of Potential Poisons

The exquisite complexity of living biosystems dictates their sensitivity to a variety of negative perturbations, which can range across a spectrum of extraneous physical, chemical and biological influences. Parasitic replicating systems are likely to have been a serious problem even at the earliest stages of molecular evolution, and defenses against them likewise must have evolved at equally early times. It is precisely the ability of invasive biosystems to replicate at a host organism’s expense which renders such parasites a serious threat. When replication per se is combined with the frequent ability of biological invaders to rapidly evolve (and alter the sets of nucleic acids and proteins by which they may be recognized), a potent selective force is generated for the evolutionary derivation of increasingly complex counter-attacks. These we refer to as immune systems.

Yet there are a great many potential environmental threats which do not directly replicate, and these may originate from either biological or non-living sources. For the latter, we could think of toxic levels of metal ions or other soluble inorganic natural chemicals (such as dangerous oxygen radicals), or natural sources of dangerous gases (such as volcanic effluxes). Across the field of biology in general, there is a huge range of natural poisons enzymatically synthesized by bacterial, fungal, plant or animal sources. As enzyme systems evolve, so too will their range of natural products change. Given these factors, the sheer numbers of potentially toxic biocompounds will vary greatly between different environments, and the specific nature of such molecules in any setting will inevitably alter over time.

What is needed in order to deal with this? One might consider a system where each potential threat was countered and nullified by a specific recognition molecule, but this proposal soon looks quite impractical if a very large number of potential molecular agents are possible. Also, as just noted, such agents are not immutably fixed, and even a small chemical alteration might stymie effective recognition by a specifically-directed receptor. Immune systems facing challenges from infectious biological replicators have used a variety of strategies, culminating in adaptive immunity, where complex mechanisms are used to generate receptors which are indeed ‘tailored’ to a novel threat. This level of sophistication has never evolved for dealing with non-replicating chemical poisons, an issue to be revisited in a subsequent post.

How then is defense against noxious chemicals obtained? While there is no comparable specificity to that seen with antibodies generated by adaptive immune systems, multiple lines of defense have evolved to counter specific poisonous threats. Dangerous levels of certain metal ions, for example, can be countered through the actions of proteins called metallothioneins, which bind and sequester such metals and thereby ameliorate their toxic effects. Strongly oxidizing chemical groups (whether generated through normal metabolic activities or acquired from the environment) are routinely mopped up by a variety of endogenous antioxidants, among which the versatile metallothioneins are included. But of particular interest in this context is the huge diversity of foreign organic small molecules which might potentially impact upon any organism’s normal biological operation – how can these be effectively neutralized?

Before looking at this question, it would be useful to consider a little semantics revolving around what will become a key word here: ‘xenobiotic’. This word, literally ‘stranger to life’, is often used in two distinct, though overlapping, senses. Firstly, it can refer to any molecule which is foreign to the physiological functioning of the organism in question. In other words, in this sense a ‘xenobiotic’ denotes any molecular entity which is not synthesized by the organism itself, and which is not normally used by it as a food, nutrient cofactor, or for any other function. As such, it covers the whole gamut of natural products deriving from the collective biosphere which are foreign to the normal make-up or functioning of a specific organism. Clearly, though, this definition would also encompass all artificial molecular entities, all molecules whose origins derive entirely from human ingenuity. And here the second sense of ‘xenobiotic’ arises, since in many cases this word is used to refer (more or less) exclusively to artificial compounds, especially where they have become environmental contaminants. Although this framing of the word is more restrictive, it is actually closer to its literal meaning as foreign to life in general, thus indicating something truly new under the sun. Nevertheless, for the present purposes it will be used in the first sense, which embraces natural ‘foreignness’ as well as artificial sources of molecular ‘non-self’. After all, no-one suggests that defenses against foreign chemical agents evolved to deal with the advance possibility of non-natural compounds emerging in the environment!

Levels of xeno-defense

Since it is impossible for an organism to avoid taking in physical materials from its environment, the potential for exposure to noxious chemicals will always exist. But equally obviously, the risks from ingesting nutrients are not evenly distributed across the environment as a whole, and avoiding foci of possible danger is of clear value. This is simply in accord with the old dictum, ‘prevention is better than cure’, although of course blindly applied by favorable evolutionary selective pressures.  For mobile animals, chemosensory perception has an important role in screening out noxious nutrient sources. Potentially dangerous decaying foods can warn via their odors, and many food sources (especially plant-derived) bearing toxic secondary metabolites signal the potential threat through strongly bitter and aversive tastes.  Since most toxic plant alkaloids are not volatile, taste aversion is likely to be the most important means of primary screening of potentially noxious environmental compounds.

This then returns us to the above general question regarding how an organism can handle the multiplicity and diversity of potential molecular threats, by asking how the front-line taste-based screening can work. It is now known that the perception of bitterness is mediated by the ‘Type 2’ taste receptors (TAS2R), encoded by about 30 distinct genes in the human genome. Obviously this would be massively insufficient to cover the scope of potentially noxious compounds if each receptor were specific to a single target structure. While much more information is needed, it appears that while different TAS2R receptors respond to different bitter tastants, the receptors as a whole are not dedicated to unique structures. A key descriptive word in this context, which will apply at other stages of xenorecognition, is ‘promiscuity’: relaxed discrimination between different molecular structures serving as recognition targets. Presumably, the promiscuity shown by the TAS2R receptors permits perception of a wide enough range of structures to be biologically useful as a front-line gating of potential poisons. (Each receptor is likely to have its own pattern of structural recognition, such that collectively the receptors cover a sufficiently adequate area of chemical space).
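The arithmetic behind such collective coverage can be sketched with a toy calculation. The numbers here are invented purely for illustration (real per-receptor ‘hit’ probabilities across chemical space are not known):

```python
import random

def collective_coverage(n_receptors=30, hit_probability=0.1,
                        n_compounds=10_000, seed=0):
    """Estimate the fraction of a random compound 'space' recognized by
    at least one of a set of promiscuous receptors. hit_probability is
    the (invented) chance that any one receptor recognizes any one
    compound."""
    rng = random.Random(seed)
    recognized = 0
    for _ in range(n_compounds):
        if any(rng.random() < hit_probability for _ in range(n_receptors)):
            recognized += 1
    return recognized / n_compounds

# Analytically, coverage = 1 - (1 - p)^n; with p = 0.1 and n = 30 this
# is about 0.96, so ~30 modestly promiscuous receptors can still flag
# the great majority of the space.
print(collective_coverage())
```

The point of the sketch is simply that broad coverage does not require one receptor per compound: modest promiscuity, multiplied across a small receptor repertoire, goes a long way.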

Clearly, though, other lines of defense against noxious molecules will be needed. While obviously biologically useful, gating against primary ingestion of poisons could not provide any guarantees. Toxic products might fail to register bitterness, be so potent as to be still dangerous when below the threshold of taste awareness, or be masked by other tastants present in the entire ingested material. Or poisons might be inadvertently taken in by non-oral routes, thereby circumventing anything that TAS2R signaling could achieve.

The conventional view of the processing of ingested drugs (meaning essentially the same as natural or artificial xenobiotics in this context) is divided into three metabolic phases, involving various Xenobiotic Metabolizing Enzymes (XMEs) and other factors. In Phase I metabolism, xenobiotics are acted on by enzymes (particularly those of the cytochrome P450 family) which incorporate or expose chemical functional groups, by redox or hydrolytic reactions. In Phase II, the initial processing facilitates the transfer of natural biological groups onto the xenobiotic to form various conjugates. Phase III reactions (those occurring post-conjugate formation) can involve further processing by Phase I enzymes, and often are taken to include the export of modified xenobiotics across cell membranes by various efflux systems.
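The three phases can be caricatured as a simple processing pipeline. This is a purely schematic sketch: the function names and string ‘modifications’ are invented labels standing in for the chemistry, not real reactions.

```python
# Schematic pipeline for the three metabolic phases. String tags stand
# in for chemical modifications; real Phase I chemistry is carried out
# by enzyme families such as the cytochrome P450s.

def phase1_functionalize(xenobiotic):
    """Phase I: redox or hydrolytic reactions incorporate or expose a
    functional group on the xenobiotic."""
    return xenobiotic + "[-OH]"

def phase2_conjugate(xenobiotic):
    """Phase II: an endogenous group (e.g. glutathione) is transferred
    onto the functionalized compound to form a conjugate."""
    return xenobiotic + "[+glutathione]"

def phase3_export(conjugate, cell):
    """Phase III: efflux transporters pump the conjugate out of the
    cell."""
    cell.remove(conjugate)
    return conjugate

conjugate = phase2_conjugate(phase1_functionalize("xenobiotic"))
cell = {conjugate}
exported = phase3_export(conjugate, cell)
print(exported)  # xenobiotic[-OH][+glutathione]
print(cell)      # set()
```

As the figure below also indicates, the real picture is less linear than this pipeline suggests (Phase III can loop back to Phase I processing, and some xenobiotics are exported without conjugation).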

Enzymes modifying ingested xenobiotics must clearly be capable of recognizing their molecular structures, although (as seen with the TAS2Rs above) not necessarily with high specificity. In relatively recent times, it has become apparent that before the onset of the Phase I metabolic processing, the primary recognition event involves key proteins generally termed ‘xenosensors’. Many of these had been previously discovered and defined as part of a nuclear receptor superfamily, but initially termed ‘orphans’ owing to their uncharacterized ligand-binding functions. Some such proteins, however, were later found to bind xenobiotic compounds, an interaction which in turn activates these nuclear receptors as transcription factors regulating the expression of key downstream Phase I-III proteins. (This new knowledge accordingly released these nuclear receptors from their ‘orphan status’). Among these xenosensors, the pregnane X receptor (PXR) and constitutive androstane receptor (CAR) have received much attention, but various other xenosensing nuclear receptors exist.

Another important xenosensor is the aryl hydrocarbon receptor (ArHR), which (in distinction to the above nuclear receptors) is initially located in the cytoplasm; upon binding one of its target ligands, the ArHR is translocated to the nucleus for regulation of its specific transcriptional targets. Figure A below depicts both primary xenosensing and the above-noted three Phases of xeno-processing:

Figure A. Depiction of cellular recognition for a hydrophobic xenobiotic (able to directly traverse cell membranes). Primary xenosensing and the three Phases of metabolic processing are depicted, culminating in the export of the modified compound. This simplified depiction does not attempt to show subcellular locations of the various metabolic components. The xenotransporters can act on conjugates between modified xenobiotics and ubiquitous factors such as the peptide glutathione, but in some cases xenobiotics may be directly exported. (These alternatives are represented by the xenobiotic in red).


And this second figure (B) below includes primary taste perception in the context of xenosensing:

Figure B. Depiction of xenobiotic recognition for protective purposes as a process where the front-line is held by odorant and (particularly) taste receptors.


The above-noted limited promiscuity of the TAS2R bitter taste receptors also applies to both xenosensors and xeno-processing enzymes. In other words, each of these participants in the recognition and processing of xenobiotics can bind a considerable range of substrates, but in that regard each has a distinct set of target substrate preferences.

The manner in which xenobiotics are metabolized is crucial to the outcome of the exposure of an animal to the alien molecule(s). All of the players in xenobiotic responses and handling can vary genetically, and this can be a major influence on outcomes for both the short and long-term.

Genetics and Poisons

In passing, we can note that genetic variation in response to chemical challenges is not limited to organic compounds. In the first post of this ‘subtle poison’ series, the deleterious effects of both heavy metals and mineral fibers were noted. In both of these cases, genetic influences on host responses have been recorded, although more data are needed to fully characterize the relevant genes involved in each area.

The role of individual variation in xenobiotic-metabolizing enzymes, and in turn variation in the way such molecules are processed between different individuals, has become of great interest in recent times. For the pharmaceutical industry and medicine, clearly an ability to accurately define how a drug will behave in a specific individual would be immensely valuable, and much useful information has been gained in specific cases. In particular, studying differences in cytochrome P450 family allelic enzyme activity levels has been a profitable undertaking, with clinical applicability.

But for the present purposes, let’s look at a particular aspect of the innate genetically encoded anti-xenobiotic responses, where the response itself is responsible (wholly or in part) for the resulting toxic effects.

An Autoimmune Analogy

Earlier in this post, the response against xenobiotics was contrasted with immune systems evolutionarily selected for countering infectious replicators. A fundamental difference between the vertebrate adaptive immune system and responses to xenobiotic threats is the restriction of the latter to sets of germline genes. In other words, while adaptive immune systems can generate and select novel receptors for countering previously unanticipated pathogens, the xenobiotic ‘immune system’ is expressed from innate sets of genomic coding sequences. In this respect, responses against ingested xenobiotics have more in common with the innate immune systems (of either vertebrates or invertebrates), where gene products recognizing specific microbial ‘danger signals’ are encoded in germline genomes.

The adaptive immune system’s greatest strength, the generation of novel receptors to meet novel threats, is also a potential source of harm through the unwanted generation of self-reactive immune specificities. Even though evolution has developed extremely sophisticated ways of avoiding this, adaptive immune system autoimmunity presents an ongoing clinical burden. It might be thought that any innate defense system would bypass this problem, since any innately encoded proteins or nucleic acids recognizing self should be strongly selected against through evolution. Yet it is now known that certain aspects of innate immunity can indeed help trigger autoimmunity under specific circumstances.

Responses mediated by xenobiotic sensors and processors can also directly mediate deleterious results, in contradistinction to the ‘proper’ physiological roles. Although there is no direct parallel with innate immunity to be made, certainly one can view such inadvertently self-destructive responses as ‘autoimmune’ in a broad analogous sense, if one likewise considers xenobiotic processing as a special kind of innate (and usually protective) immunity in its own right.

Self-activation of xenobiotic deleterious effects

There is more than one pathway by which innate mechanisms can produce deleterious reactions to xenobiotic challenge. There is much precedent for a primary xenobiotic whose toxicity is not manifested until it is modified in vivo by Phase I metabolic enzymes. As a case in point, note that a previous post in this series looked at the generation of a neurological condition recapitulating Parkinson’s disease by the compound MPTP. Here the initial molecule was not the direct villain, but rather an MPTP derivative (MPP+) produced by the action of monoamine oxidase (MAO) enzymes. (A striking confirmation of this in animal studies is the blocking by MAO inhibitors of the neuronal destruction otherwise mediated by MPTP administration). Also, benzo[a]pyrene, a previously noted environmental xenobiotic found in soot and coal tar, is modified by cytochrome P450 enzymes to an active epoxide derivative, which directly forms DNA adducts ultimately contributing to its carcinogenicity. In these circumstances the Phase I enzymes therefore actually aid and abet the carcinogenic process.

Detoxification may require a sequence of enzymatic modifications following an initial xenobiotic exposure. During this process, an elevated toxicity of intermediate derivatives may be ‘acceptable’ if their presence is transient and the overall chain of modifications leads to complete elimination of the initial toxic problem. Genetic variations in the activities of key enzymes which retard the removal of highly toxic intermediates could clearly result in significant problems. A classic exemplar of these processing factors is the metabolism of ethyl alcohol (ethanol). This widely popular compound is initially converted into acetaldehyde by alcohol dehydrogenase (and also by the cytochrome P450 member CYP2E1), then into acetate by aldehyde dehydrogenase, and ultimately into carbon dioxide and water. These aspects of ethanol metabolism are common to all humans, which means that anyone imbibing alcoholic beverages is exposed not only to ethanol itself, but also to the same metabolic products. The most potentially dangerous of these is acetaldehyde, a known carcinogen. But since the processing of acetaldehyde yields quite benign acetate, the presence of ethanol-derived acetaldehyde is transitory. Just how transitory, however, may be the crucial issue.

It would seem obvious enough that a variant of aldehyde dehydrogenase (ALDH2) with reduced activity would allow the build-up of acetaldehyde after ethanol intake, and this is indeed the case for a significant fraction of humanity (mainly in East Asia) bearing an allelic variant of this enzyme (ALDH2*2) with very low activity. Blocking removal of acetaldehyde renders the effects of alcohol unpleasant, a feature which can be produced in anyone by means of drugs inhibiting the ALDH2 enzyme. (This has been the basis of one type of treatment for alcoholism). But increased levels of acetaldehyde can also result if the catalytic rate of alcohol dehydrogenase (ADH) itself is higher than the usual baseline, and this is seen with the ADH allele 1C*2. In such circumstances, the elevated rate of acetaldehyde production (relative to its enzymatic removal) increases its transient concentration in comparison to that seen with normal ADH.
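The kinetic intuition here (that either slower acetaldehyde removal or faster acetaldehyde production raises its transient peak) can be sketched with a simple two-step first-order chain. The rate constants below are arbitrary illustrative values, not physiological measurements, and the allele labels in the comments are only loose analogies:

```python
def acetaldehyde_peak(k_adh, k_aldh, ethanol0=1.0, dt=0.001, t_max=50.0):
    """Euler simulation of the two-step chain
        ethanol --k_adh--> acetaldehyde --k_aldh--> acetate
    with first-order (illustrative, not physiological) kinetics.
    Returns the peak acetaldehyde level reached."""
    ethanol, acetaldehyde, peak = ethanol0, 0.0, 0.0
    for _ in range(int(t_max / dt)):
        d_eth = -k_adh * ethanol
        d_ald = k_adh * ethanol - k_aldh * acetaldehyde
        ethanol += d_eth * dt
        acetaldehyde += d_ald * dt
        peak = max(peak, acetaldehyde)
    return peak

normal   = acetaldehyde_peak(k_adh=1.0, k_aldh=5.0)
low_aldh = acetaldehyde_peak(k_adh=1.0, k_aldh=0.5)  # ALDH2*2-like: slow removal
fast_adh = acetaldehyde_peak(k_adh=3.0, k_aldh=5.0)  # ADH 1C*2-like: fast production
print(f"{normal:.2f} {low_aldh:.2f} {fast_adh:.2f}")
```

With these toy values the peak acetaldehyde level is highest when removal is slow, intermediate when production is fast, and lowest in the ‘normal’ case, mirroring the relative risks discussed above.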

By whatever means increased levels of acetaldehyde may be produced, the same trend towards increased carcinogenicity results, and evidence for the role of acetaldehyde in ethanol-induced cancers is well-characterized. Under natural circumstances, alcohol may be ingested in relatively small amounts sporadically (think of fermented fruits), but high-level or prolonged exposure in humans is almost always through voluntary actions. So alcohol could be viewed as having ‘autotoxic’ effects involving both conscious-level decision-making, and also at the molecular level from an individual’s own metabolic processing enzymes. Acetaldehyde toxicity resulting from ethanol intake can also have both immediate effects (sickness, flushing) and more subtle long-term negative consequences (induction of tumors). And (to invoke the analogy with autoimmunity), some individuals are highly sensitive to the effects of acetaldehyde produced from ethanol directly as a result of their genetic backgrounds (as with the ALDH2*2  or ADH 1C*2 alleles).

Pursuing this theme a little further, it’s interesting to consider (as mentioned above in passing) that  acetaldehyde can also be produced from ethanol through the Phase I metabolic enzyme CYP2E1. Normally, though, the contribution of CYP2E1 is small except in the case of heavy habitual drinkers, where the enzyme becomes induced. But alcohol is certainly not the only target for this enzyme, given the promiscuous range of substrate recognition by Phase I metabolic catalysts. It eventuates that CYP2E1 converts the common analgesic drug acetaminophen (paracetamol) into toxic derivatives, and when high CYP2E1 has been induced, serious liver toxicity can result from taking normally innocuous acetaminophen doses. Here, then, is another link between a higher-level behavior (albeit pathologized by alcohol addiction) and a ‘blind’ molecular process, both of which elicit ‘autotoxic’ effects.

In mice, the effects of CYP2E1 can be dramatically documented with gene knockouts. Removal of CYP2E1 activity by genetic ablation greatly reduces acetaminophen toxicity. Toxicity for another one of its substrates, benzene, is similarly removed, whereas normal mice given comparable benzene doses are severely affected.

Xenobiotics and Induced Receptor Activity

Now to consider a different pathway for self-inflicted deleterious effects from xenobiotics. Here the focus will be on the highly toxic compound 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), very often simply referred to as dioxin (although ‘dioxin’ per se is not chemically specific, and is also used to refer to related compounds).

Structure of TCDD


The combination of very high toxicity, environmental persistence, and generation as an unwanted industrial by-product renders TCDD a special problem. It came into particular prominence following the major contamination and human exposures resulting from the chemical plant accident at Seveso, Italy in 1976. Another boost to the notorious reputation of this compound came in 2004 from its use in the deliberate poisoning of Viktor Yushchenko, subsequently president of Ukraine, evidently an assassination attempt. (It is a little ironic that he is the second individual living within the former territory of the Soviet Union to be featured in this series of posts as a notable victim of a malicious toxic assault. The other person of note was the Russian Alexander Litvinenko, who succumbed to radioactive polonium, as noted previously).

TCDD exerts its effects through binding to the above-mentioned aryl hydrocarbon receptor (ArHR), resulting in its prolonged activation and pathological expression of ArHR target genes (when the receptor is translocated to the nucleus and acts as a transcription factor). As a xenosensor (depicted in Figure A above), the ArHR activates expression of various downstream xenobiotic-metabolizing enzymes, but TCDD is a poor substrate for these, being only very slowly metabolized. Combined with its high fat solubility, this gives the compound very long persistence in humans, and its in vivo presence is thus associated with long-term over-activation of the ArHR.
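This persistence can be put in rough perspective with a first-order elimination calculation. The seven-year human half-life used below is only a commonly cited ballpark figure, adopted here as an assumption; reported values vary considerably with dose and body composition.

```python
def fraction_remaining(years, half_life_years=7.0):
    """Fraction of a TCDD body burden remaining after first-order
    elimination. The ~7-year human half-life is an assumed ballpark
    value for illustration only."""
    return 0.5 ** (years / half_life_years)

for years in (1, 7, 28):
    print(f"after {years:>2} y: {fraction_remaining(years):.2f} remains")
```

On this simple model, over 90% of a burden remains after a year, and even after nearly three decades a measurable fraction persists, consistent with the long-term receptor over-activation described above.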

Animal studies provide strong evidence that binding and activation of the ArHR by TCDD is its primary, if not exclusive, mechanism of toxicity. Mice with the ArHR gene ‘knocked out’ by gene targeting technology show a stunning resistance to TCDD-mediated effects. (Although the ArHR certainly has normal physiological roles, and animals lacking this receptor show certain defects, they can grow to maturity and thereby allow such toxicological studies to be conducted).

To refocus on the ‘autoimmune’ theme of this section, consider that in toxicology it is commonly stated that ‘the ArHR mediates dioxin toxicity’ (or words to that effect), reflecting the inescapable conclusions of the above knockout data and many other studies. Rather than a poison directly damaging the functioning of an organism, in this case the poison only creates havoc by effectively enlisting a host factor to initiate self-harm. Consider also that if the ArHR is a component of a system with an important role in detoxifying xenobiotics, then for TCDD (and other known chlorinated polycyclic compounds) the process is subverted towards a biologically dysfunctional end. As such, the TCDD / ArHR precedent would appear to be a classic exemplar of the analogy of ‘misfired’ xenobiotic responses with autoimmune reactions.

Xenobiotics and Real Autoimmunity

Although TCDD and the ArHR are thus used to exemplify a self-damaging process analogized with autoimmunity, ironically they also provide a very cogent link with autoimmunity which is real in every sense. Evidence suggests that xenobiotics can induce autoimmunity by at least two processes, somewhat reminiscent of the above two pathways for xenobiotic-induced self-damage itself. There is now considerable experimental data supporting the contention that self-proteins which have become modified by reaction with xenobiotic compounds (‘modified self’) can trigger immune reactions which cross-react with normal self-structures, and thereby trigger an autoimmune response. This kind of effect has often been termed ‘molecular mimicry’ elicited by the xenobiotic-derived host neoantigens. Alternatively, modification of host proteins by foreign chemicals may generate self-recognition of otherwise cryptic self-epitopes. Exposure to certain heavy metals (mercury in particular) can also trigger unequivocal autoimmunity in animal models, probably by similar mechanisms.

Theoretically, a second broad means of physiological modulation by xenobiotics which might lead to autoimmunity could be differential effects on specific immune cellular regulatory subsets. Real evidence towards this comes from the TCDD / ArHR system once more. It turns out that a special effector helper T cell subset (TH17) bears the ArHR receptor, and prolonged signaling induced by exposure to TCDD activates these cells and exacerbates the development of autoimmune disease in mouse models. (Knockout mice lacking the ArHR accordingly lack this TCDD-induced susceptibility to autoimmunity).

So it seems indisputable that there are good grounds for proposing real intersections between xenobiotic processing, its perturbation, and autoimmune phenomena. And there we leave it for the time being. A point to consider in the next post is why some xenobiotics trigger actions which result in self-damage rather than clear detoxification.

To finish, here are two biopoly(verse) offerings. The first is made with respect to genetic influences on xenobiotic recognition, while the second refers to self-damaging responses to xenobiotic challenge:

People say, ‘So choose your parents well’

For your genotype surely will tell

How well you survive

And prosper and thrive

In a toxicological hell.


The war against toxic attrition

Is a physiological mission

But within this good fight

There are factors that might

Link self-harm as a point in addition.

References & Details

(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).

‘…..proteins called metallothioneins….’    These proteins (of which there are several classes) also have roles in the transport and delivery to specific subcellular sites of metal ions required for normal metabolic function. Metallothionein-mediated protection against metal ion toxicity is best characterized in the case of cadmium, but is also implicated in protection against mercury and possibly lead toxicities. For more detail see Klaassen et al. 2009; Sutherland & Stillman 2011; Gonick 2011.

‘……a variety of endogenous antioxidants…..’   These include Vitamins C and E, glutathione, and numerous others. For a review, see Rizzo et al. 2010.

‘…..the perception of bitterness is mediated by the ‘Type 2’ taste receptors….’    See Behrens & Meyerhof 2009.

‘…..different TAS2R receptors respond to different bitter tastants….’     In this respect, see an article about a database of compounds with bitter taste (Wiener et al. 2012), one of whose aims is to promote the understanding of the recognition of target molecules.

Each [bitter taste] receptor is likely to have its own pattern of structural recognition….’     See Meyerhof et al. 2010.

‘….the processing of ingested drugs … divided into three metabolic phases…..’   For aspects of these phases, see Nakata et al. 2006.

‘….. the pregnane X receptor (PXR) and constitutive androstane receptor (CAR)….’     See a review of Tolson & Wang 2010.

‘….other xenosensing nuclear receptors exist.’     These include the peroxisome proliferator-activated receptor (PPAR), the farnesoid X receptor, and hepatocyte nuclear factors (1alpha, 3 and 4alpha). See Dixit et al. 2005; Xu et al. 2005.

‘…..the aryl hydrocarbon receptor.…’      For a general background, see Abel & Haarmann-Stemmann 2010.

‘…..the deleterious effects of both heavy metals and mineral fibers were noted. In both of these cases, genetic influences on host responses have been recorded….’      For the heavy metals mercury and lead, a number of genes have been implicated (see Gundacker et al. 2010). In the case of mineral fiber-related diseases (especially mesothelioma caused by asbestos), it was noted in a previous post that cofactors were certainly involved. A genetic predisposition towards mesothelioma resulting from another mineral fiber (erionite) has been identified through family studies in Turkey (Dogan et al. 2006; Below et al. 2011).

‘…..studying differences in cytochrome P450 family allelic enzyme activity levels….’     For useful reviews, see Ingelman-Sundberg & Sim 2010;  Singh et al. 2011.

It might be thought that any innate defense system would bypass this problem [autoimmunity]….’     As an example of this point of view made before contrary evidence emerged, see Medzhitov & Janeway 2000.

‘….certain aspects of innate immunity can indeed help trigger autoimmunity under specific circumstances.’    Without going into too much detail, this can involve circumstances where specific types of innate recognition are controlled by cellular compartmentalization, and its perturbation in pathological states. A little more is provided in the adjunct ftp site (Extras; Chapter 3, Section A3) for Searching for Molecular Solutions. See also Rai & Wakeland 2011.

‘…….benzo[a]pyrene is modified by Cytochrome P450 enzymes to an active epoxide derivative…….’     See Ling et al. 2004.

‘….and finally into carbon dioxide and water….’      This occurs via the formation of acetyl-CoA and the citric acid cycle, described in any standard biochemistry text.

‘…..acetaldehyde, a known carcinogen….’  /  ‘….ALDH2*2 ….. the ADH allele 1C*2….’  /  ‘….the role of acetaldehyde in ethanol-induced cancers is well-characterized.’     For more background information on these topics, see Visapää et al. 2004; Seitz & Stickel 2009.

‘……..drugs inhibiting the ALDH2 enzyme [have] been the basis of one type of treatment for alcoholism….’     The classic drug in this regard is disulfiram, although the merits of its use are still controversial. See (as an example from a large literature) Jorgensen et al. 2011.

‘…..some individuals are highly sensitive to the effects of acetaldehyde produced from ethanol directly as a result of their genetic background (the ALDH2*2 or ADH 1C*2 alleles)…..’     A very interesting recent development is the observation that the ALDH2*2 mutation results in incorrect protein folding, a defect which can be corrected by a low-molecular weight ‘chemical chaperone’ (see Perez-Miller et al. 2010). Thus, in the near future perhaps enforced non-drinkers may become capable of imbibing alcohol by co-use of drugs assisting their endogenous defective aldehyde dehydrogenase enzymes, although it’s possible that not everyone would agree that this is a good thing.

‘…..In mice, the effects of CYP2E1 can be dramatically documented with gene knockouts. ‘    For the CYP2E1 work, see Lee et al. 1996. For the benzene studies, see Valentine et al. 1996.

‘……chemical plant accident at Seveso, Italy in 1976…..’     For more on this, see an online article from Time magazine. The accident was clearly associated with many cases of a skin disorder caused by dioxin (chloracne), but the effect of dioxin exposure on cancer rates in the exposed Seveso population has been controversial. In this regard, see Boffetta et al. 2009. (TCDD is clearly carcinogenic in animal models).

‘…..the deliberate poisoning of the former president of the Ukraine, Viktor Yushchenko….’     Despite bearing massive amounts of TCDD, Yushchenko survived, albeit with severe chloracne, with his symptoms slowly improving over several years. His clinical profile has been studied and reported (see Sorg et al. 2009). Unless the intention is simply to cause great pain, discomfort and disfigurement, TCDD would seem a foolish choice for malicious poisoners. Unlike rodents and other mammals, humans are not particularly susceptible to lethal effects from TCDD. Also, its overt clinical manifestation (chloracne), its in vivo persistence, and its ready detection render intoxication with TCDD easily proven.

‘…..Mice with the ArHR gene ‘knocked out’ ….. show a stunning resistance to TCDD-mediated effects.’     For a review of such studies, and other xenosensor knock-outs, see Gonzalez et al. 1995.

‘…..the ArHR certainly has normal physiological roles……’     See Abel & Haarmann-Stemmann 2010 for background information on ArHR biology.

‘……self-proteins which have become modified by reaction with xenobiotic compounds …… thereby trigger an autoimmune response.’     Although protein modifications by xenobiotics have been known for over half a century, much research in the past few decades focused on DNA chemical adduct formation, given the obvious link in such cases with mutation and aberrant DNA processing or replication. More recently, it has become clear that protein damage too can have grave pathological consequences, of which autoimmunity is a significant part. The study of xenobiotic-mediated protein adducts has greatly benefited from recent advances in proteomic technology. See Liebler 2008 for a detailed exposition of these matters.

‘…..molecular mimicry’ ….. cryptic self-epitopes…..’     For more information with respect to the evidence suggesting this kind of autoimmune induction, see Mao et al. 2004; and Selmi et al. 2011.

‘…..Exposure to certain heavy metals (mercury in particular) can also trigger unequivocal autoimmunity….’      See a review by Schiraldi & Monestier 2009.

‘…..a special effector helper T cell subset (TH17) bears the ArHR receptor, and prolonged signaling induced by exposure to TCDD …… exacerbates the development of autoimmune disease…..’      For more on this (and other aspects of TCDD effects on immunity) see Veldhoen et al. 2008; Esser et al. 2009.

Next biopolyverse offering to be posted in May, given current commitments.

Subtle Environmental Poisons and Disease – Part 2

January 29, 2012

The theme of the previous post concerned how human diseases could be triggered by environmental compounds with slow and subtle effects, with an emphasis on those which occur naturally. (The interest in natural exemplars of such effects arises from earlier posts on ‘Natural Molecular Space’). A principal theme in this follow-up post will be comparing cancer and cellular degeneration induced by environmental agents.

Subtle Carcinogens and Other Problems

With the exceptions of polonium-210 and asbestos, the ‘subtle poisons’ considered previously were neurotoxic organic molecules. But organic cancer-causing compounds have been described for a long time. The first description of an association between a specific cancer and an industrial (work-related) activity dates back to the 18th century, when a rare form of scrotal cancer was linked to chimney sweeping. From the time of publication of this finding, it took almost 160 years for science to advance enough for the active component in soot and coal tars to be identified as benzo[a]pyrene, a polycyclic aromatic hydrocarbon. Certain other such polycyclic aromatics are also carcinogenic, and they are thus known collectively as PAHs.

Of course, we now know that a whole zoo of both natural and artificial compounds can induce cancer, with varying degrees of potency. It isn’t the intended scope of this post to review a great number of specific cases here, but among the natural set of known carcinogens, an important group consists of secondary metabolites of various fungal organisms (metabolic products which are not essential components of fundamental life-support processes). While some secondary metabolites (such as antibiotics) have been extremely beneficial to humans, a ‘dark side’ of such secondary metabolism also exists. Not all toxic fungal products (or mycotoxins) are proven carcinogens, but some most certainly are. Probably the most significant in economic and human disease impact are a group of closely related compounds called aflatoxins, produced by various species of the genus Aspergillus (most notably A. flavus). Aflatoxin B1 is the most potent known natural liver carcinogen, and a major problem arising from fungal contamination of foodstuffs, such as peanuts.

Sometimes carcinogens are not directly found in natural food materials, but are actually formed during cooking processes. There is a certain irony here, because on the whole, cooking of many foods is beneficial through the killing of potentially dangerous parasites, especially those harbored in raw meat. And apart from the generally detrimental effects of parasites on health, a number of such organisms are themselves directly linked to the generation of specific cancers. Yet during ordinary cooking of meats, carcinogenic heterocyclic amines can form, and if charring is involved (as with barbecuing), polycyclic aromatic hydrocarbons can be created. Among the latter is found benzo[a]pyrene, the same compound of chimney sweep fame as noted above. Strictly speaking, carcinogens formed by cooking are not ‘natural’, since they require human intervention for their formation. Indeed, cooking itself has been considered a useful marker for distinguishing humans from all other organisms, including our primate relatives, and may even have shaped evolutionary pathways leading to modern humans. Still, while carcinogenic compounds resulting from cooking clearly arise from human agency, their formation has always been completely inadvertent, and occurred long before the faintest glimmerings of modern chemistry.

Subtlety of effect, at least as measured by the time between exposure and onset of disease, is practically a by-word for carcinogens, as well as for the ‘subtle’ neurotoxic agents considered in the previous post. This is not to say that these two broad areas of pathology cover everything where subtlety rears its head, but they may safely be grouped as the major concerns. Beyond this, one needs to consider other physiological systems which may be damaged or negatively affected slowly and subtly by non-biological environmental agents, but without tumorigenic outcomes. One case in point is the immune system, and there are precedents for natural compounds with immunosuppressant qualities. In this respect, it should be noted that toxic compounds can have multiple effects, and aflatoxins (for example) have immunosuppressive activity in addition to their other noxious manifestations. Reproductive systems can be adversely affected by natural phytoestrogens, as considered in more detail in a previous post.

These other issues aside, cancer and toxic neurological disease can be seen as book-ends in terms of the gross effects leading to divergent pathological results. Let’s consider this statement a little further.

Growth or Degeneration, and a Problem Either Way

A toxic challenge will by definition perturb normal cellular functions. Following such an event, broadly speaking, three things can happen. Firstly, an affected cell may, through its endogenous repair systems, correct the damage and resume its normal functions. Failing this (the second alternative), it can die, through a number of alternative mechanisms noted in the legend to the figure below. The best-defined form of directed or ‘programmed’ cell death is the process termed apoptosis. But if death itself should fail and replication continue, chromosomal changes induced in the cell may eventually lead to ‘transformation’, where the normal controls on growth are circumvented and a tumor phenotype acquired, the third possible outcome. Successive genetic changes can accumulate, and transformed cells with invasive properties become amplified through their enhanced growth and survival properties. It is no accident that important genes regulating apoptosis are frequently mutated in cancer cells. If checkpoints on cell growth are removed through blockade of cell death, barriers to transformation may be greatly reduced. Indeed, while most carcinogens are also potent mutagens (inducing genetic mutations in genomic DNA), some are not. The latter have been a long-standing puzzle, but it has been shown that at least some non-mutagenic chemical carcinogens are direct blockers of apoptosis, thereby allowing cells with mutations (normally removed by apoptosis) to persist and proceed down transformation pathways.

As noted in the previous post, recovery from a toxic insult might not necessarily be complete, in the sense that the post-toxic state may be sub-optimal relative to the norm, predisposing the cellular victims to future risk. But leaving such complications and the general area of damage repair aside, the major enduring pathological consequences of a low-level toxic assault revolve around cancer vs. degeneration. These outcomes might seem like diametrically opposed processes, since in one case cells grow wildly without normal constraints, and in the other, they die. While the final end-points are clearly quite divergent, it is interesting that the factors which push cells along these pathways have many regions of overlap. Genetic analyses have shown that many mutations which predispose towards Parkinson’s disease are also associated with certain cancers.

But toxic chemicals can also have dramatically different effects depending on the cellular context in which they act. A specific genotoxic (DNA-damaging) compound found in cycad plants (methylazoxymethanol) can induce neurological damage and degeneration in mice without tumor formation, whereas it induces tumors at high frequency in the colon. Major determinants of the outcome of such a toxic challenge are the levels of appropriate DNA repair enzymes (the effectiveness of the DNA damage response), and differential effects on cellular signaling pathways. Up- or down-regulation of specific pathways operating in diverse cell lineages can evidently result in outcomes as distinct as transformation or death. A clear distinction between neurons and most other differentiated cells is their cell division status, where non-dividing and long-lived neurons can be contrasted with lineages showing active turnover through cell division. Neurons thus permanently exit the cell cycle into a ‘post-mitotic’ state for the lifetime of the organism.

Indeed, trying to force a mature neuron to re-enter the cell cycle (by artificially expressing viral gene products which ‘kick-start’ cycling in other quiescent cells) has been observed to promote cell death. Given this piece of information, a differential response to at least some toxic agents can be conceptualized in fairly simple terms: forcing a quiescent cell that is nonetheless ‘primed’ for mitotic cycling back into active division may lead to carcinogenic transformation; doing the same thing to a mature neuron will kill it. This dichotomy is portrayed in the figure below:

Outcomes of mutational damage through low-level genotoxic exposure for neurons vs. non-neuronal cell lineages. In both cases, repair mechanisms exist, which may be insufficient to deal with the problem. Dividing cells may then be diverted into a programmed cell death pathway (usually apoptosis) and thus removed. In a population of renewable replicating cells, this is unlikely to be a direct problem, and of course eliminates a potentially dangerous altered cell. Yet if the shunt towards apoptosis fails for any reason, the altered cell may continue to proliferate and acquire further mutations, with the ultimate consequence of malignant transformation into a fully cancerous phenotype. For neural cells, beyond a certain damage threshold, death is inevitable, even for stimuli that normally promote mitosis in other cell types. Note here that cell death in general can occur by at least three mechanisms, shown specifically for neurons in this schematic. The process of autophagy (a kind of self-recycling of cellular components) is associated with repair processes, but can also constitute a specific cell death mechanism in some circumstances. Apoptosis is a programmed form of death operating through specific cellular signaling interactions, and autophagy can interface with some of these apoptotic pathways (as shown by arrows).  Necrosis was originally categorized as non-specific and passive cell death brought on by severe physical or chemical insults. While cell death in such a disordered manner is presumably still a possibility, recent evidence has shown that at least one form of non-apoptotic ‘necrotic’ cell death is also a regulated process, which has been duly termed ‘necroptosis’.  In any event,  unlike loss of cells with a high mitotic turnover, death of non-dividing neurons will ultimately have significant functional consequences.


And yet the above figure might, upon further reflection, appear overly simplistic. Two additional pieces of widely known information are relevant here: neurological tumors, and neurological plasticity. If neurons die upon transformation, what is the source of brain tumors? And what about the considerable publicity which has been given to the previously unsuspected potential for recovery from substantial brain damage, indicative of ‘neuroplasticity’? In both cases, a simple answer is that neither the ‘dark side’ of neural tumors nor the much brighter prospects for neural regeneration derive from irrevocably post-mitotic cells. In both cases, neural stem cells, identified only relatively recently, may be the central players. With respect to tumors, the “neural stem cell hypothesis” proposes these cells as the source of primary brain tumors, as opposed to metastatic tumors which originate elsewhere but migrate to the brain and grow there.

The above figure addresses genotoxic substances, but this does not cover other neurotoxic agents such as the compound MPTP, discussed in the previous post owing to its specific role in inducing cell death in the substantia nigra brain region, and thereby leading to induced Parkinson’s disease. As noted previously, MPTP has a distinct toxic mechanism via the enzymatic formation of a specific metabolic product, which is taken up by dopamine-producing neurons. This metabolic derivative then inhibits mitochondrial respiration, leading to cell death. Studies have found, however, that MPTP also has mutagenic properties – or at least, once again, one of its metabolic products is the active compound in such assays. Yet even if MPTP indirectly caused unrepairable genomic lesions in target neurons, the above observations suggest that cell death would still be the outcome, rather than prolonged growth and transformation.

To conclude then with a summary of sorts upon this theme:

It seems cancer and neural decay

Are opposed, in a particular way

For to die or to grow

Is the question, you know

And the source of young Hamlet’s dismay

References & Details

(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).

‘……earlier posts on ‘Natural Molecular Space….’    See posts from last year of 19th, and 26th of July; the 9th, 16th, and 23rd August; and the 6th September.

‘….an association between a specific cancer ….. chimney sweeping….’    This resulted from observations by Dr. Percivall Pott (1714-1788) on scrotal cancer in young chimney sweeps, first published in 1775. See Brown & Thornton 1957 for relevant historical information.

‘….the active component in soot and coal tars was identified as benzo[a]pyrene….’    See Ling et al. 2004 for more information, especially including the structure of benzo[a]pyrene-DNA adducts, by which its carcinogenicity is manifested.

‘…..a whole zoo of both natural and artificial compounds can induce cancer…..’   For a review on the diversity of carcinogens (including but not limited to organic compounds), see Yang 2011.

‘….an important group ….. secondary metabolites of various fungal organisms.’    For more information on secondary metabolites, see the earlier post of 19th July, 2011.

‘……toxic fungal products (or mycotoxins)…..’     For a useful review, See Pitt 2000.

‘…..Aflatoxin B1 is the most potent known natural liver carcinogen….’    See a review by Hedayati et al. 2007.

‘…..a number of different parasites are themselves directly linked to the generation of specific cancers….’     If we consider ‘parasites’ in the broadest sense, then there are numerous precedents of viral and bacterially-generated cancers. But in food-related circumstances, ‘parasite’ will most often refer to various worms, some of which are indeed associated with cancer. For example, see Vennervald & Polman 2009 for a review of the status of helminth worms as carcinogenic agents.

‘…..ordinary cooking of meats, carcinogenic heterocyclic amines can form…’    For details, see Nakagama et al. 2005; and Frederiksen 2005.

‘…..if charring is involved……polycyclic aromatic hydrocarbons can be created….’   See Daniel et al. 2011.

‘……cooking ….. may have even shaped evolutionary pathways leading to modern humans…’     The ‘cooking hypothesis’ has led to a very interesting book by Richard Wrangham, Catching Fire: How Cooking Made Us Human; Basic Books 2009.

‘……there are precedents for natural compounds with immunosuppressant qualities….’     Although in normal circumstances immunosuppression is obviously undesirable, for some medical applications certain natural immunosuppressants have proved a great boon, being highly useful for suppressing unwanted immune rejection of transplanted organs, and thereby greatly facilitating the efficacy of transplant surgery in general. These include cyclosporin A and FK506, which form ternary complexes between cellular proteins (cyclophilin and FKBP respectively) and the protein phosphatase calcineurin. See Fox & Heitman 2002 for a review.

‘……while most carcinogens are also potent mutagens…….some are not….’     See Kokel et al. 2006, for presentation of evidence that specific non-genotoxic carcinogens act by suppressing apoptotic cell death.

‘…..many mutations which predispose towards Parkinson’s disease are also associated with certain cancers….’     See Devine et al. 2011.

‘……found in cycad plants…..’     See the previous post for more detail regarding the neurotoxic effects of cycads.

‘A specific genotoxic (DNA-damaging) compound found in cycad plants (methylazoxymethanol) induces neurological damage ….’      This refers to an interesting publication of Kisby et al. 2011.

Major determinants of the outcome….’     Also Kisby et al. 2011, and see as well Barzilai 2010.

‘…..cell death…..’    For further information regarding autophagy, see Kaushik & Cuervo 2006.  For a perspective on apoptosis in the light of the recently described necroptosis, see Christofferson & Yuan 2010.

‘……publicity which has been given to the previously unsuspected potential for recovery from substantial brain damage….’     Neuroplasticity has received much popular notice largely owing to the book The Brain That Changes Itself, by Norman Doidge. Viking, 2007.  Note that one aspect of neuroplasticity, the ability of neurons to exhibit ‘regenerative sprouting’ from axons, is not the same as acquiring the ability to undergo full cell division. See Wieloch & Nikolich 2006.

‘…..the “neural stem cell hypothesis” proposes their origin…..’     For more information, see Germano et al. 2010.

‘…..MPTP also has mutagenic properties  – or at least, once again, one of its metabolic products is the active compound in such assays…..’     See Cashman 1987; and Ulanowska et al. 2007.

Next Post: Late March.

Subtle Environmental Poisons and Disease – Part 1

December 5, 2011

The past series of posts has largely been preoccupied with the benefits to be had from ‘natural molecular space’, whether the molecules in question are large, small, or functionally linked together in complex (but useful) entire biosystems.

Obviously, some biomolecules are not merely useless, but actively harmful. There are a great many bioproducts which are of both high toxicity and obvious impact, at least to the unfortunate victims of serious or even life-threatening natural poisonings or envenomations. But toxic effects can be much more subtle, and therefore much less easily noticed. In fact, the insidious slowness of some toxic effects can render the actual molecular culprits very hard to pin down, and inevitably controversy is thus generated. These ‘subtle negative’ environmental influences are the principal theme for this discussion, which will include natural products, but will also heavily feature both artificial compounds and non-biological but ‘natural’ substances. (The quotation marks are used here since it is very often only through human activities that natural materials with potentially harmful effects are processed and brought into contact with sizable numbers of people).

What Does Subtlety Mean in a Toxic Context?

When we speak of a subtle toxic effect, what is actually meant? It might result from several factors, or any combination of them, including potency, exposure dose, frequency of exposure over time, and the in vivo persistence of the toxic substance. Any ingested toxic compound must by definition interfere with an important biochemical process, with ensuing negative consequences for the functioning of the organism. A poisonous substance might interact with many different biological molecules, but some of these will be of greater import than others in terms of how the resulting deleterious effects are produced. And the affinity of the poison for such biological targets is a determinant of potency.

Potency and dosage over time are inter-related. To qualify as ‘subtle’, intake of a highly potent compound (one whose toxic threshold is reached with very small amounts) would need to be in exceedingly low quantities, where no immediate effects are apparent. If that were the end of it, then obviously such a low-level exposure to the toxic agent would have no further consequences. But a subtle deleterious effect might exist if the compound had produced some kind of persistent tissue or cellular damage, of a type very hard to detect without sophisticated intervention, and not at all appreciable by the individual concerned. Then, several possibilities exist which could in the end result in a manifested disease state. Firstly, if the individual is re-exposed to the same source of the toxin on more than one occasion, the damage might be cumulative, accreting until it becomes significant enough to produce an overt illness. If the body’s repair systems cannot comprehensively deal with the low-level induced damage, in some cases even long intervals between exposures might still result in noticeable pathology. But even if the repair is effective, regular intake of similar low doses of the toxic material over time might eventually overwhelm the host defenses, again leading to disease.
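The interplay of repeated dosing and partial repair described above can be sketched as a toy calculation. This is purely illustrative: the dose, repair fraction, and threshold notion are all hypothetical parameters, not measured toxicological quantities.

```python
# Toy model (all parameters hypothetical): cumulative damage from
# repeated low-dose exposures, each followed by repair that removes
# a fixed fraction of the accumulated damage.
def damage_after_exposures(dose, repair_fraction, n_exposures):
    damage = 0.0
    for _ in range(n_exposures):
        damage += dose                      # each exposure adds damage
        damage *= (1.0 - repair_fraction)   # repair removes a fraction
    return damage

# With imperfect but proportional repair, damage does not grow without
# bound; it approaches a plateau of dose * (1 - r) / r. Whether overt
# disease results then depends on whether that plateau exceeds some
# pathological threshold.
plateau = 1.0 * (1 - 0.2) / 0.2   # = 4.0 for dose = 1, repair_fraction = 0.2
```

The qualitative point of the sketch is that even 'effective' repair only caps, rather than eliminates, the accumulated burden, which is one way a long series of individually negligible exposures can still end in disease.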

These scenarios assume repeated exposures, but even a single exposure could potentially have significant consequences. A single bout of damage, if not fully repaired, may simply add another entry to an individual’s ‘wear and tear’ list that grows with ageing. In other words, any such low-grade but persistent toxic ‘insult’ might become more significant over time, in combination with other problems inevitably occurring through life. But a much more serious possibility has also been proposed, where short-lived exposure to certain chemical agents might actually set up an ongoing pathological inflammatory process, even long after the original poison has been removed from the host system. This theme will be looked at in a little more detail in a later post in this series.

At this point, it is very relevant to consider the physiological removal of toxic agents, or (in other words) how long noxious substances of any description can persist once taken into a host organism. Persistence has clear-cut implications for the ability of a substance to contribute to long-term and subtle deleterious effects. While water-soluble (hydrophilic) compounds are generally metabolized and excreted reasonably quickly, lipid-soluble (hydrophobic) compounds can be taken up by fat reserves and remain there for years, with only a slow diminution over time. A classic example in this regard is the insecticide DDT, whose tendency to persist in adipose (fat) tissue is well described. Poisons which are themselves toxic elements obviously cannot be further ‘broken down’ chemically, and can persist through their interactions with normal biomolecules. For example, heavy metals such as lead and mercury can bind and inhibit numerous enzymes. Although the resulting complexes between metals and protein molecules may be physiologically degraded, release of the metal component may simply liberate it for another cycle of inhibition. In some cases, a noxious element may be physically or chemically similar to a normal biologically-used element, and replace it in certain biomolecules, with disastrous effects on metabolic activities. This is the case for the toxic elements arsenic (capable of competing with phosphorus) and thallium (capable of competing with potassium).

Another major class of persistent and dangerous substances are certain mineral fibers, most notably asbestos. Poorly biodegraded long fibers (such as some mineral silicates, of which asbestos is a case in point) can persist indefinitely in specific anatomical sites. Although the mechanism is still incompletely understood, this can be associated with the generation of a chronic inflammatory process and ultimately carcinogenesis. The link between asbestos and mesothelioma is well recorded.

If we cast a wide enough net, another class of non-biological poisons must certainly be included: radionuclides, or radioactive isotopic versions of the elements. These can be radioisotopes of normal elements of biological significance, or radioisotopes of non-biological elements. All such cases can be of either natural or artificial origin. Many examples of the former group can be cited, but potassium-40 (40K) is a natural radioisotope of particular interest, since it contributes the largest portion of the radioactive background in living organisms. As such, it has been proposed as a major source of natural mutation, although experimental results have suggested that its contribution to mutation must indeed be (if anything) a subtle influence. Cases of relevant non-biological radioisotopes are likewise exceedingly numerous. Briefly, consider the example of polonium-210 (210Po), which can occur naturally, or can be generated by artificial nuclear reactions. This radioisotope is present in tobacco smoke, and it has been implicated as a major factor in the generation of smoking-induced cancer. Polonium-210 has also been in the news in recent years, through its use as an exceedingly potent poison in the murder of the ex-Russian agent Alexander Litvinenko in London in 2006. There’s obviously nothing subtle about that, but as with any toxic agent, even polonium-210 can exert low-level effects if ingested in small enough doses. At that lower end of the exposure scale, the effects will vary among different individuals, but may contribute to cancers or other conditions, with an overall shortening of life expectancy.
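For a radionuclide, in vivo persistence is bounded by physical decay as well as by biological clearance. The decay arithmetic is the standard half-life law; polonium-210's physical half-life is roughly 138 days (biological clearance, which acts on top of this, is ignored in this minimal sketch):

```python
# Fraction of a radioisotope's original activity remaining after t days,
# from the half-life decay law N(t) = N0 * 0.5 ** (t / t_half).
def activity_fraction(t_days, half_life_days):
    return 0.5 ** (t_days / half_life_days)

# For polonium-210 (physical half-life ~138 days), two half-lives
# (~276 days) leave a quarter of the initial activity.
po210_after_276_days = activity_fraction(276.0, 138.0)  # = 0.25
```

The same function applies to any isotope; the contrast with a stable toxic element such as lead or thallium is simply that for the latter the 'half-life' term is effectively infinite, and only biological clearance operates.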

Individual variation in responses to low-level toxic exposure reflects genetic variation in the metabolic processing of foreign compounds, or in how the body reacts to the presence of noxious materials. There is much more to be said on this topic, which will be picked up at a later time within this series of posts. But for the time being, we can note this as one of a number of influences bearing on whether a low-level toxic exposure will have longer-term ‘subtle’ effects, depicted in the figure below:

A depiction of the range of various influences which can determine whether a substance could manifest a slow or insidious ‘subtle’ toxicity.  Note that an implicit issue within ‘Generation of Ongoing Pathology’ is the ability of host systems to repair and contain toxic insults, as opposed to the generation of responses which are ultimately self-damaging.


The influence termed ‘cofactors’ in the above diagram simply refers to any other non-host factor which can interact with a proposed environmental toxic substance to exacerbate its action, or even be essential for the insidious toxic effect to be manifested in the first place. An interesting example is a putative requirement for the presence of simian virus 40 (SV40) for the generation of mesothelioma by asbestos.

For the rest of this post, I’ll move on to some specific examples of effects which have revealed subtlety in several senses of the word. The first case involves an artificial compound which is not strictly speaking an ‘environmental’ agent, since exposure required self-administration, albeit inadvertent. However, the experience with this compound has had many ramifications which do impinge on environmental influences, both man-made and natural.

(1) Parkinson’s Disease & Toxic Agents

In the early 1980s a remarkable series of events occurred which had implications across several fields of science and medicine. Although terrible and tragic in many ways, it provided a dramatic example of how a toxin can produce quite specific neurological effects, and had direct implications for the origins of Parkinson’s disease (PD). At that time in California, clinicians were confronted with a series of drug addicts in a state of ‘frozen’ mobility, which had many similarities to severe PD. Subsequent scientific detective work showed that this apparent similarity was more than just superficial. The sporadic condition of human PD is characterized by ongoing degeneration in a region of the brain called the substantia nigra, where destruction of neurons normally producing the crucial neurotransmitter dopamine leads to loss of muscular motor functions, eventually immobilizing the patient. These neurons are also pigmented, through the production of a type of melanin (‘neuromelanin’), an early observation which provided the name of this brain area (‘substantia nigra’ = Latin for ‘black substance’). A compound, L-3,4-dihydroxyphenylalanine (L-DOPA, which can access the brain and is metabolized to dopamine itself), can greatly alleviate symptoms, especially when first applied. The ‘frozen’ addicts likewise generally showed responsiveness to L-DOPA. By analyzing their common activities, the source of the problem was tracked down to their injection of a street drug preparation, a ‘synthetic heroin’, which in actuality was a botched attempt to make the drug meperidine (pethidine). The preparation that the clandestine chemists had produced contained sizable amounts of a different compound, 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), eventually identified as the toxic culprit by means of animal testing. These studies also showed that MPTP ingestion resulted in specific damage to the substantia nigra, with associated loss of dopamine-producing neurons and the onset of parkinsonian symptoms.

Structures of some relevant molecules for the Parkinson’s / MPTP story. The amino acid phenylalanine is included as the precursor to dopamine, and to show its chemical similarity to L-DOPA. Meperidine is the drug whose abortive synthesis led to the formation of MPTP. MPP+ is the actively toxic metabolic product derived from MPTP itself.


The striking features of this story were widely reported in the scientific literature, and even found their way into popular fiction quite quickly. Those unmistakably victimized by MPTP had varying fates, ranging from death within a relatively short time, to survival for over a decade. But behind the initial cadre of severely affected patients, the prospect still remains of many more people developing PD from short-term exposure to MPTP (and initially subclinical damage) even decades ago. And this naturally raises one of the major implications of the whole MPTP saga: if a defined toxin can have such amazingly specific effects, could there not be other toxins in the environment with similar properties, which induce the neurodegeneration seen in ‘sporadic’ parkinsonian patients? In the course of these kinds of speculations, it was noted that the first formal description of this disease was a relative latecomer, appearing only in 1817. Could the apparent lack of reporting of this disease in earlier times mean that ‘natural’ PD is actually a toxic condition, associated with the beginnings of the industrial revolution and newly introduced environmental pollutants?

Many studies have been conducted in order to evaluate this and related questions. In particular, exposure to certain insecticides has been a long-standing suspect as a potential agent of PD, but despite ‘probable cause’, this has not been firmly nailed down. These kinds of analyses must distinguish between genetic influences and environmental factors. (Many distinct genes are known to affect an individual’s susceptibility to PD, and this will be further considered in a subsequent post in this ‘subtle’ series). Studies with monozygotic (identical) twins illustrate this. In one detailed 1999 investigation, monozygotic twin pairs showed no significant difference in the concordance of PD (its common incidence in both members of a twin pair) compared to non-identical twin pairs, but only (and this is a crucial point) if the age of onset for either twin was after 51 years of age. Non-concordance of a disease in twin pairs in a controlled study strongly suggests that environmental causes are at least contributing factors: if a disease had a simple genetic origin, high concordance would be expected in the (essentially) genetically identical pairs. Most cases of sporadic PD occur later in life, also consistent with (but far from proof of) a slow induction from environmental sources. But where PD does occur at younger ages, genetic influences (rare mutations, possibly in combination with environmental factors) might be postulated, and this is consistent with the higher concordance observed in identical twins with relatively young ages of PD onset. But the only general conclusion typically made at present is that the origin of sporadic PD is complex, with multiple genetic and environmental influences implicated directly or as suspects. And yet there is no question that, at least in certain genetic backgrounds, MPTP alone can induce a pathology with the key characteristics of PD. How does it do this?
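
The logic of the twin-concordance comparison can be sketched in a few lines of code. The counts below are purely hypothetical (they are not the figures from the 1999 study), and serve only to illustrate how pairwise concordance is computed and compared between monozygotic (MZ) and dizygotic (DZ) pairs:

```python
# Pairwise concordance: among twin pairs with at least one affected member,
# the fraction in which BOTH twins are affected. A purely genetic disease
# should show much higher concordance in MZ than in DZ pairs.
# All counts below are hypothetical, for illustration only.

def pairwise_concordance(both_affected, one_affected):
    """Concordance among pairs in which at least one twin is affected."""
    at_least_one = both_affected + one_affected
    return both_affected / at_least_one

# Hypothetical late-onset PD counts: similar MZ and DZ concordance,
# which points away from a simple genetic origin.
mz_late = pairwise_concordance(both_affected=2, one_affected=16)
dz_late = pairwise_concordance(both_affected=2, one_affected=18)

print(f"MZ late-onset concordance: {mz_late:.2f}")
print(f"DZ late-onset concordance: {dz_late:.2f}")
```

If the MZ figure were substantially higher than the DZ figure, a genetic origin would be favored; near-equal values instead implicate environmental factors.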

 A Stealth Poison At Work

Intensive studies on the mechanism of MPTP toxicity revealed that it was not the direct perpetrator of the neuronal damage. MPTP itself is acted upon by a specific enzyme within the brain, monoamine oxidase (MAO) B, which converts this compound into a positively charged species, the N-methyl-4-phenylpyridinium ion (MPP+, as shown in the above chemical structure figure). Consistent with this observation, inhibitors of MAO enzymes are protective against the effects of MPTP in animal models. MPP+ itself is capable of using the machinery for dopamine transport into neurons (the specific dopamine transporter), and this promotes its accumulation in very specific neuronal sites. It is important to note that this particular uptake mechanism also explains the high selectivity of MPTP (the precursor to MPP+) in its toxic action. Once taken up by dopamine neurons, MPP+ itself acts as a primary toxic agent towards mitochondria, through its inhibition of Complex I of the mitochondrial respiratory electron transport chain.

With the MPTP story, a series of processes is thus required for the ultimate toxic effect to be manifested: conversion to MPP+, uptake by dopamine neurons, and inhibition of mitochondrial activities. (These are primary factors; other issues such as specific genetic backgrounds certainly contribute to individual susceptibility, as will be discussed further in a subsequent post). So, it has been noted that this conjunction of requirements would (hopefully) render compounds with properties analogous to MPTP quite rare. With this in mind, are there natural precedents for this kind of noxious chemical agent? This raises the second set of cases to be considered (as noted above): natural toxic substances with ‘subtle’ actions. In many such circumstances, the subtlety is bound up with the difficulty of pinning down the true identity of the pathogenic culprit.
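
This conjunction of requirements can be expressed as a simple chain of conditions. The sketch below is purely schematic (not a biochemical simulation), but it captures why blocking any single step, such as MAO-B conversion, is protective:

```python
# Toy model of MPTP's multi-step toxicity: every stage must succeed for the
# final lesion to occur, so removing any one link in the chain (e.g. with an
# MAO-B inhibitor) prevents the toxic outcome.

def mptp_toxic(mao_b_active, dat_uptake, complex_i_inhibited):
    """All three conditions must hold for MPP+ to damage dopamine neurons."""
    converted_to_mpp = mao_b_active                    # MPTP -> MPP+ via MAO-B
    concentrated = converted_to_mpp and dat_uptake     # uptake via dopamine transporter
    return concentrated and complex_i_inhibited        # mitochondrial Complex I block

# Untreated exposure: all steps proceed, so toxicity results.
assert mptp_toxic(True, True, True)

# MAO-B inhibitor present: no conversion to MPP+, hence no toxicity.
assert not mptp_toxic(False, True, True)

# Cells lacking dopamine-transporter uptake are spared.
assert not mptp_toxic(True, False, True)
```

The same AND-chain logic underlies the hope expressed above: a compound must satisfy every requirement simultaneously to be an MPTP-like poison.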

(2) Cycads, Soursops, and other ‘Environmental’ Neurological Diseases

In certain Western Pacific islands, epidemiologists have noted for decades an unusual incidence of a degenerative neurological condition called Amyotrophic Lateral Sclerosis / Parkinsonism-Dementia complex (ALS-PDC). In the language of the Chamorros of Guam, a people living on one of the afflicted island groups, the disease is known as ‘lytico-bodig’. A strong role for genetic influences in the origin of ALS-PDC seemed unlikely, given that it was recorded in diverse ethnic groups in varied Western Pacific locations. For a considerable time, though, a dietary item has been implicated: the consumption of a flour made from the seeds of cycad plants available in the affected locales. This remains unproven and controversial, particularly since a specific compound has not been conclusively identified. Yet the general ‘cycad hypothesis’ has support from a number of linked observations. Cycad flour fed to experimental animals over time induces a neurological condition with features of progressive parkinsonism, with associated damage to the substantia nigra. Also, the incidence of ALS-PDC has been in decline in recent years, and this correlates with dietary changes in which consumption of cycad-derived material has markedly decreased. A specific amino acid, β-methylamino-L-alanine (BMAA; not found in normal proteins) has been repeatedly linked with cycad-induced disease, but proof of its role has consistently fallen short of the mark. Another contender is methylazoxymethanol (MAM, a metabolite derivative of the cycad compound cycasin), which has been shown to produce neurological genotoxicity.

Whatever the outcome of these studies, there is no question that raw cycad seeds (from which flour is derived) are quite poisonous, and this has long been known to Western Pacific peoples. But by using extensive washing and soaking procedures, they have ingeniously found a way to exploit this otherwise-useless material as a valuable foodstuff. The great irony implicit in the ‘cycad hypothesis’ is that although they succeeded in eliminating the acute toxicity of the cycad seeds, they could not remove traces of toxic substances which may have been the agents of subtle and insidious neurological damage.

Another potential natural molecular assailant of neurons is also found in an island setting, but in the West Indies. A high incidence of an ‘atypical’ parkinsonism has been identified on the island of Guadeloupe. (One example of the atypical nature of this condition is its failure to respond to L-DOPA.) This has been linked by epidemiological studies with the consumption of the tropical fruit called soursop, and a specific compound from this fruit (annonacin) has been implicated as the probable underlying source of the pathology. Annonacin is an inhibitor of mitochondrial Complex I, and can also induce loss of dopamine neurons in the substantia nigra of experimental animals – findings which cannot help but stimulate recollection of the MPTP story, even if there are many points of divergence.

Finally, it’s interesting to note that both ALS-PDC of the Pacific and the Guadeloupe disease also have pathological features of ‘tauopathies’, or diseases associated with abnormal intracellular distribution of a protein called tau, which is normally found in conjunction with neuronal microtubules (a part of the cytoskeleton). In addition, one aspect of the neuropathy induced by annonacin is abnormal neuronal tau behavior. But a massively more frequent and consequential tauopathy is Alzheimer’s disease, so these findings raise the fascinating question as to whether environmental toxic agents might contribute to the burgeoning world-wide caseload of Alzheimer’s – and if so, how much, and under what genetic circumstances? The significance of such questions for public health in countries with increasingly ageing populations is obvious.

One point already alluded to above is the notion that a transient ‘hit and run’ exposure to a toxic substance might set up a continuous and actually self-perpetuating cycle of damage. Such a possibility could considerably complicate attempts to identify causative toxic agents. If a single short-lived exposure (or transient set of exposures) to an agent can result in disease many years later, it is clear that fingering the original culprit becomes correspondingly more difficult. It remains a possibility that such effects are relevant to the cycad saga at least, but a more detailed consideration of this notion is a topic for a later post in this series.

In the meantime, a biopoly(verse) rumination:

 Bring genetics and host factors to view

Where some insidious poisons can brew

To stay and remain?

Or start off a chain

Of damage in an unfortunate few.

References & Details

(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).

A classic example in this regard is the insecticide DDT……’    (With respect to persistence in fat). See Turusov et al. 2002.

‘….arsenic (capable of replacing phosphorus) and thallium (capable of replacing potassium).’   With respect to arsenic, it is interesting to recall the recent controversy regarding ‘arsenical life’, where arsenic in a specific bacterium was reputedly replacing phosphorus (see a previous post for brief detail on this). Arsenic can compete with phosphorus when it is in the form of arsenate (See Kaur et al. 2011; and also Dani 2011 for a discussion of the biological significance of this). For more details regarding thallium and its competition with potassium, see Hoffman 2003.

‘…..Long fibers …. can persist indefinitely in specific anatomical sites….’   See Churg et al. 1994; Coin et al. 1994.

‘…..the generation of a chronic inflammatory process and ultimately carcinogenesis…..’   (With respect to mineral fibers)…. ‘…..asbestos and mesothelioma…..’     See McDonald 1998; Godleski 2004.

‘….release of the metal component may simply liberate it for another cycle of inhibition.   This can be overcome if a chemical agent (a chelator) is administered which is capable of tightly binding the metal, solubilizing it, and allowing it to be excreted. See Flora & Pachauri 2010; Jang & Hoffman 2011.

‘….potassium-40 (40K) …. has been proposed as a major source of natural mutation, although experimental results suggest that its contribution to mutation must indeed be a subtle influence.’   See Gevertz et al. 1985 for more detail and a refutation of the importance of this radioisotope for mutation, at least in bacteria.

‘…..polonium-210 (210Po), …is present in tobacco smoke, and it has been attributed a major role in the generation of smoking-induced cancer….’ See Zagà et al. 2010.

Polonium-210 has been in the news in recent years, through its use as an exceedingly potent poison in the murder of the Russian Alexander Litvinenko…..’ Polonium-210 is an α-emitter (Helium-4 nuclei). While these emitted particles are relatively massive and poorly penetrating, they are very dangerous if an α-source has been ingested. Doses as little as 1 μg may be lethal in susceptible individuals, and doses of several hundred μg will be universally fatal. See Scott 2007. For more details on the Litvinenko case, see a BBC timeline article.

‘….polonium-210 can exert low-level effects if ingested in small enough doses.’   See also Scott 2007.

The influence termed ‘cofactors’ ….. example is a putative requirement for the presence of simian virus 40 (SV40) for the generation of mesothelioma by asbestos….’    See Rivera et al. 2008; Qi et al. 2011. Note that SV40 was a contaminant of early Salk polio vaccine preparations (see Vilchez & Butel 2004).

‘….origins of Parkinson’s disease…..’     This disease (the ‘shaking palsy’) was first described in the early 19th century by Dr. James Parkinson (Thomas & Beale 2007), who thus bequeathed his name to it. Although obviously an eponymous title, the “Parkinson” is often now rendered with a lower-case ‘P’.

These neurons are also pigmented…..’    Melanocytes, the cells in the skin which produce the pigment melanin responsible for skin color (along with the related pigment pheomelanin) are derived from the same embryological origins as neurons, the neural crest.

‘….a type of melanin (‘neuromelanin’)….’ Neuromelanin is chemically similar, but not identical to, the black melanocyte pigment, which itself is often termed ‘eumelanin’. See Zecca et al. 2001.

‘…..the source of the problem [Parkinson-like illness] was tracked down…..’   See Langston et al. 1983.

‘….widely reported in the scientific literature….’   For example, see an article in 1984 by Roger Lewin in Science, whose title (‘Trail of Ironies to Parkinson’s Disease’) speaks for itself.

‘…even found their way into popular fiction quite quickly….’    The well-known ‘new wave’ science fiction novel Neuromancer by William Gibson (Ace Science Fiction, 1984) features a particular scene where an individual is deliberately victimized by means of the nasty aspects of MPTP neurotoxicity. Since the book was first published in 1984, this was at the time a very quick uptake on a scientific and medical development.

Those unmistakably victimized by MPTP had varying fates…..’   See Langston’s popular book (co-authored with Jon Palfreman), The Case of the Frozen Addicts (Pantheon, 1995). Also see a Wired magazine article.

‘…..a relative latecomer in 1817…..’    See the above note about James Parkinson.

‘….natural’ PD …. a toxic condition?’     See Calne & Langston 1983.

‘….exposure to insecticides ….as a potential agent of PD …. not been firmly nailed down…’    See Brown et al. 2006.

Studies with monozygotic (identical) twins…..’    See Tanner et al. 1999; Tanner & Aston 2000.

Most cases of sporadic PD occur later in life….’     Only 1-3% of total PD cases can be attributable to direct genetic causes (See Lorinicz 2006).

‘…the origin of sporadic PD is complex…..’    See Burbulla & Krüger 2011; Wirdefeldt et al. 2011.

‘….MPTP itself is acted upon by a specific enzyme within the brain, monoamine oxidase….’     See Herraiz 2011 (a).

‘…..inhibitors of MAO enzymes are protective against the effects of MPTP…..’ Herraiz 2011 (b).

‘….also explains the high selectivity of MPTP (the precursor to MPP+) in its toxic action…’    For an early report on MPP+ uptake, see Javitch et al. 1985.

‘….it [MPP+] acts as a primary toxic agent towards mitochondria….’    For a little more detail on mitochondrial activity, see a previous post. For more on Complex I in general, and with respect to MPTP / MPP+, see Schapira 2010.

‘….epidemiologists have noted an unusual incidence ….ALS-PDC…’     For an entertaining account of the history of this topic, see The Island of the Colour-blind (Picador, 1996; Book Two, Cycad Island) by the famous neurologist Oliver Sacks. For a general overview of ALS-PDC, see Steele 2005.

Cycad flour fed to experimental animals…..’    See Shen et al. 2010.

A specific amino acidBMAA….has been repeatedly linked with cycad-induced disease…’     For a review and disputation of this, see Snyder & Marler 2011.

Another contender is methylazoxymethanol….’     See Kisby et al. 2011.

A high incidence of an ‘atypical’ parkinsonism has been identified on the island of Guadeloupe….’    See Champy et al. 2004; Lannuzel et al. 2008.

‘….a specific compound from this fruit (annonacin) has been implicated….’     See Champy et al. 2004; Lannuzel et al. 2008. Other compounds chemically related to annonacin have also been implicated: See Alvarez Colom et al. 2009.

‘…one aspect of the neuropathy induced by annonacin is abnormal neuronal tau behavior…’ See Escobar-Khondiker et al. 2007.

Next Post: This is the last post for 2011; will be back early next year.

Paradigms Revisited and Chemiosmosis

September 27, 2011

From time to time it will be appropriate to offer updates (or upgrades) of previous posts. In late March, I looked at ‘paradigm shifts’ in biological science, particularly in the context of so-called biological ‘dark matter’. Here a Table was provided with a list of some developments in recent bio-history which could qualify as paradigm shifts, especially against the current background where the meaning of a scientific ‘paradigm’ has been diluted in much of the literature. While this Table was not originally intended to be completely comprehensive, after the fact I have noted that a particularly important case was inadvertently overlooked. That is the subject of the current post.

The Chemiosmotic Hypothesis

Cellular processes require energy, and a universal energy ‘currency’ is the molecule adenosine triphosphate (ATP). It has long been recognized that the hydrolysis of ATP to the corresponding diphosphate (ADP) provides the free energy for driving a host of biological reactions. The synthesis of ATP itself is therefore of crucial significance, and naturally requires an energy source in order to be accomplished.
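
As a rough illustration of why ATP is such an effective energy currency, the free energy released by its hydrolysis under cellular conditions can be estimated from the standard relation ΔG = ΔG°′ + RT ln([ADP][Pi]/[ATP]). The standard value (about -30.5 kJ/mol) and the millimolar concentrations below are typical textbook figures, assumed here purely for illustration:

```python
import math

# Free energy of ATP hydrolysis under roughly physiological conditions.
R = 8.314e-3   # gas constant, kJ / (mol K)
T = 310.0      # body temperature, K
dG0 = -30.5    # standard free energy of ATP hydrolysis, kJ/mol (textbook value)

# Assumed cellular concentrations (mol/L), typical textbook figures:
atp, adp, pi = 5e-3, 0.5e-3, 5e-3

dG = dG0 + R * T * math.log((adp * pi) / atp)
print(f"Cellular free energy of ATP hydrolysis: {dG:.1f} kJ/mol")  # roughly -50 kJ/mol
```

The point of the calculation is that the cell keeps ATP far above its equilibrium ratio with ADP and phosphate, so the free energy actually available per hydrolysis is substantially larger than the standard value.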

In 1961, a British biochemist by the name of Peter Mitchell published a paper in Nature outlining a novel proposal for the mechanism of ATP generation through the electrochemical properties established across certain biological membranes. Such membranes are found in prokaryotes, and in eukaryotes within their mitochondria (the ubiquitous organelles concerned with energy production) or chloroplasts (the plant cellular organelles mediating photosynthesis). Mitchell’s ‘chemi-osmotic’ hypothesis postulated that, rather than relying on an energy-rich chemical intermediary, oxidative phosphorylation (the synthesis of ATP from ADP occurring during respiration) was dependent on proton (hydrogen ion) flow across membranes. In essence, respiratory processes pump protons across an enclosed membrane boundary such that an electrical potential is generated across the membrane. Mitchell termed the ‘pull’ of protons back across the membrane the ‘proton motive force’, or a proton current. This flow of protons can be directed through protein-mediated channels for the purposes of performing useful work.
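
The magnitude of the proton motive force can be estimated from Mitchell’s relation Δp = Δψ - (2.303RT/F)ΔpH, combining the membrane potential with the pH gradient. The Δψ and ΔpH values below are typical figures quoted for the mitochondrial inner membrane, assumed here purely for illustration:

```python
# Proton motive force across the mitochondrial inner membrane.
R = 8.314      # gas constant, J / (mol K)
T = 310.0      # body temperature, K
F = 96485.0    # Faraday constant, C / mol

# Conversion factor: ~61.5 mV per pH unit at 37 C (the familiar '59 mV' at 25 C).
z_factor_mV = 2.303 * R * T / F * 1000

d_psi_mV = -160.0   # assumed membrane potential, matrix negative
d_pH = 0.5          # assumed pH gradient, matrix more alkaline

pmf_mV = d_psi_mV - z_factor_mV * d_pH
print(f"Proton motive force: {pmf_mV:.0f} mV")  # on the order of -190 mV
```

Note that for mitochondria most of the proton motive force comes from the electrical term, whereas in chloroplast thylakoids the pH term dominates.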

Although now enshrined within the modern biochemical world-view, in the early 1960s this notion was quite radical, and not at all in tune with many of the ideas of most major researchers in the field at that time. In fact, it took over a decade and a half before enough evidence was garnered to convince most remaining doubters. But Mitchell certainly had the last laugh, being awarded a Nobel Prize for his innovative proposal in 1978.

ATP Synthase and the Chemiosmotic Hypothesis

A remarkable catalytic complex at the core of ATP generation, the membrane-associated ATP synthase (ATPase), has had a central role in the ultimate acceptance of the chemiosmotic hypothesis. This resulted from studies on purified components of the synthase complex and reconstitution experiments, where directed proton flow across sealed model membranes (liposomes) was shown to be crucial for ATPase activity. In some ingenious experiments, the required proton flow was produced by the introduction of a protein involved with prokaryotic photosynthesis (bacteriorhodopsin) as a light-driven proton pump. (Other proton pumps from diverse biochemical sources could also perform similar roles). Such findings were subsequently reinforced by numerous structural and functional studies.

The ATPase has been revealed as a molecular motor driven by proton flow directed through the transmembrane (‘Fo’) component of the catalytic complex. The proton current is harnessed to provide energy for driving the physical rotation of the soluble (‘F1’) ATPase component, resulting in ATP synthesis at three catalytic sites. In some amazing cases of experimental virtuosity, this molecular rotation has been visualized in real time using fluorescent tags, and the association of rotation with ATP synthesis demonstrated by magnetic bead attachment to the F1 subunit, followed by artificial rotation induced by appropriate magnets.
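
Since the F1 head synthesizes three ATP per full rotation, while one full rotation consumes as many protons as there are c-subunits in the Fo ring, the proton cost per ATP follows directly. The c-ring sizes below are commonly cited experimental values, used here as illustrative assumptions:

```python
# Proton cost of ATP synthesis: H+/ATP = (c-ring size) / (ATP per rotation).
# c-ring sizes vary between species; the examples below are commonly cited
# figures, assumed here for illustration.

ATP_PER_ROTATION = 3   # three catalytic sites on the F1 head

c_ring_sizes = {
    "bovine mitochondria": 8,
    "yeast mitochondria": 10,
    "spinach chloroplast": 14,
}

for organism, c in c_ring_sizes.items():
    h_per_atp = c / ATP_PER_ROTATION
    print(f"{organism}: c-ring = {c} subunits, H+/ATP = {h_per_atp:.2f}")
```

The variable c-ring stoichiometry means the thermodynamic ‘gear ratio’ of the motor differs between organisms, a point that only makes sense within the chemiosmotic framework.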

The striking nature of the membrane-associated ATPase as a rotary molecular motor has inspired many offshoot thoughts and speculations. As a demonstration of a ‘natural nanomotor’, it would come as no surprise that the nascent field of nanotechnology has paid particular notice.

Why a Paradigm Shift?

So, it might be immediately seen that the proposal, experimental testing, and ultimate support for the chemiosmotic hypothesis is of great scientific significance, but is it really meaningful to refer to it as a paradigm shift? Well, yes, it is. Firstly, the initial resistance to the idea is itself consistent with the view that a paradigm shift requires the upheaval and dismantling of an earlier consensus – if not by the death of an aging cadre of reactionary biologists, at least via their eventual acceptance of the concept through the accumulated weight of evidence.

But perhaps the most fundamental novelty of Mitchell’s ideas came from the inherent aspect of spatial organization of cellular structures in determining function, as he explicitly stated. In his own words, from his 1961 Nature paper:

 “the driving force on a given chemical reaction can be due to the spatially directed channelling of the diffusion of a chemical component or group along a pathway specified in space by the physical organization of the system”.

In other words, structures on a cellular scale (membranes, in this case) can serve as a basis for directing biochemical reactions in specific ways, and this general effect has also been termed ‘vectorial biochemistry’. This view was a radical proposal in the early 1960s – and accordingly met with considerable resistance.  In fact, cells are not just ‘bags of enzymes’, but partitioned in complex ways into different compartments, and this partitioning is very significant for specific functioning. This is particularly so (as we have seen) for bioenergetics.

The development of some form of membrane compartmentalization of proto-cells during the early stages of the origin of life is recognized as a major evolutionary transition. Its importance can be inferred from simple logic, since an evolving molecular biosystem could never undergo progressive selection and functional advancement were its components not restricted into a bounded spatial compartment. Dilution of reactants would otherwise rapidly remove any useful molecular innovations, and bring in potentially interfering molecules. Included among the latter are likely parasitic systems, whose unchecked activities would be a permanent stumbling block. But the long-term implications of the chemiosmotic principle show us that biological membranes are much more than just phospholipid sacs demarcating collections of biological molecules from the external environment. They are integral and essential parts of biological operations in their own right. And their evolution into these roles is a very ancient event in the history of life. Leaping from early biogenesis to future human aspirations, the importance of membranes and higher-level structures for vectorial direction of function should not be forgotten when artificial cell design is contemplated.

So Mitchell’s contribution is duly inserted into the original ‘paradigm shift’ Table thus:

It is also notable that this year marks the 50th anniversary of the publication of Mitchell’s seminal paper.


And finally, a biopoly(verse) salute to the pioneer:

The hypothesis chemiosmotic

Made Mitchell seem quirky and quixotic

But opinions revise,

And then a Nobel Prize

Sealed the field as no longer exotic.

References & Details

(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).

‘……a British biochemist by the name of Peter Mitchell published a paper in Nature…’     See Mitchell 1961.

Mitchell’s hypothesis……’     For perspectives of both Peter Mitchell and the chemiosmotic hypothesis see Harold 2001 and Rich 2008.

‘….Mitchell ….. awarded a Nobel Prize for his innovative proposal in 1978.’    See Harold 1978; also the Nobel organization site for the 1978 Chemistry prize.   See also a relevant piece in Larry Moran’s Sandwalk blog.  Mitchell died in 1992.

‘…..studies on purified components of the synthase complex…..’.    A major contributor to these studies was Efraim Racker (1913-1991). A biographical memoir by Gottfried Schatz (National Academies Press, online) provides an excellent background to this and numerous related areas. Paul Boyer and John Walker also were pivotal in structure-function studies regarding ATP synthase, for which they received the Nobel Prize for Chemistry in 1997. For a very recent and comprehensive review of the membrane-associated rotary ATPase family, see Muench et al. 2011.

‘…..the introduction of a protein involved with prokaryotic photosynthesis….’    See Racker et al. 1975.

‘…..nanotechnology has paid particular notice….’    See Block 1997 (Article title “Real Engines of Creation”,  which refers to K. Erik Drexler’s book Engines of Creation, a pioneering manifesto of the potential for nanotechnology – Doubleday, 1986). Also see Knoblauch & Peters 2004.

‘…..artificial cell design…..’    See a previous post on synthetic genomes and cells for more on this cutting-edge topic.

Next Post: Regrettably, work commitments enforce a temporary hiatus on biopolyverse posts until early December. But will return then!!