
Biological Parsimony and Genomics

August 24, 2015

The previous post discussed the notion that biological processes, and biosystems in general, exhibit a profound economy of organization and structure, which can be termed biological parsimony. At the same time, there are biological phenomena which seem to run counter to this principle, at least at face value. In this post, this ‘counterpoint’ theme is continued, with an emphasis on the organization of genomes. In particular, the genome sizes of the most complex forms of life (unlike those of simpler bacteria) considerably exceed, at least superficially, the apparent basic need for functional coding sequences alone.

Complex Life With Sloppy Genomes?

When it comes to genomics, prokaryotes are good advertisements for parsimony. They have small and very compact genomes, with minimal intergenic spaces and few introns. Since their replication times are typically very short under optimal conditions, the time and energy requirements for genomic replication are often significant selective factors, tending to streamline genomic sizes as much as possible. A major factor for the evolution of prokaryotic organisms is their typically very large population size, which promotes the rapid positive selection of small fitness gains. Prokaryotic genomes are thus under intense selection for functional and replicative simplicity, leading to rapid elimination of non-essential genomic sequences.
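To make the replication-cost point concrete, here is a back-of-envelope sketch in Python. The genome size and fork speed are standard textbook figures for E. coli; the rest is simple arithmetic.

```python
# Back-of-envelope sketch: minimum time to copy a bacterial genome.
# Illustrative figures (E. coli K-12: ~4.6 Mb chromosome, one origin,
# bidirectional replication, fork speed on the order of 1000 nt/s).

GENOME_SIZE_BP = 4_600_000   # E. coli chromosome, base pairs
FORK_SPEED_NT_S = 1000       # nucleotides per second, per fork
N_FORKS = 2                  # bidirectional replication from a single origin

replication_time_min = GENOME_SIZE_BP / (FORK_SPEED_NT_S * N_FORKS) / 60
print(f"Chromosome copy time: ~{replication_time_min:.0f} min")
# ~38 min: longer than the ~20 min doubling time in rich media, which
# E. coli reconciles by initiating new rounds of replication before the
# previous round finishes. Every added megabase costs another ~8 min,
# one concrete way genome size feeds into replicative fitness.
```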

Yet the situation is very different for the more complex biologies of eukaryotes, where genome sizes are commonly larger by 1000-fold or more than that of the bacterial laboratory workhorse, E. coli. It is widely recognized that this immense differential is enabled in eukaryotic cells through the energy dividend provided by mitochondria, the organelles acting as so-called powerhouses of such cells. Mitochondria (and chloroplasts in plants) are intracellular symbionts, descendants of ancient bacterial forms which entered into an eventual partnership with progenitors of eukaryotic cells, and in the process underwent massive genomic reduction. The energetic contribution of mitochondria enabled much larger cells, with concomitantly much larger genomes.

If eukaryotic genomes can be ‘sloppy’, and accommodate very large tracts of repetitive DNAs deriving from parasitic mobile elements, or other non-coding sequences, where is the ‘parsimony principle’ to be found? We will return to this question later in this post, but first let’s look at some interesting issues revolving around the general theme of genomic size.

Junk is Bunk?

While a significant amount of genomic sequence in a wide variety of complex organisms is now known to encode not proteins but functional RNAs, genome sizes still seem much larger than what should be strictly necessary. This observation is emphasized by the findings of genomic sequencing projects, where complex organisms, including Homo sapiens, show what seems at first glance to be a surprisingly low count of protein-coding genes. In addition, closely related organisms can have markedly different genome sizes. These observations are directly pertinent to the ‘C-value paradox’, which refers to the well-documented disconnect between genome size and organismal complexity. Since genomic size accordingly appears to be arbitrarily variable (at least up to a point), much non-coding DNA has been considered by many in the field to be ‘junk’. In this view, genomic expansion (by duplication events or extensive parasitism by mobile genetic elements) has little if any selective impedance until finally limited by truly massive genomic sizes. In other words, the junk DNA hypothesis holds that genomes can accumulate large amounts of superfluous sequence which are essentially along for the ride, being replicated in common with all essential genomic segments. This trend is only restricted when genomes reach a size which eventually does impact upon the relative fitness of an organism. Thus, even the junk DNA stance concedes that genomes must necessarily be size-restricted, even though a lot of genomic noise can be tolerated before this point is reached.

It must be noted that the junk DNA viewpoint has been challenged, broadly along two separate lines. One such counterpoint holds that the apparent lack of function of large sectors of eukaryotic genomes is simply incorrect, since a great deal of the ‘junk’ sequences are transcribed into RNAs with a variety of essential cellular functions beyond encoding proteins. As noted above, there is no question that functional (non-coding) RNAs are of prime importance in the operations of all cellular life. At a basic level this has been known almost since the birth of molecular biology, since ribosomal RNAs (rRNAs) and transfer RNAs (tRNAs) have been described for many decades. These RNAs are of course essential for protein synthesis, and are transcribed from corresponding genomic DNA sequences.

But in much more recent times, the extent of RNA function has become better appreciated, to include both relatively short regulatory molecules (such as microRNAs [miRNAs]) and much longer forms (various functional non-coding species [ncRNAs]). While the crucial importance of these classes of nucleic acid functionalities is beyond dispute, the relevance of this to genome sizes is another matter entirely. To use the human genome as a case in point, even if the number of functional RNA genes were twice the size of the protein-coding set, the net genome size would still be much larger than required. While the proponents of the functional-RNA refutation of junk DNA have pointed to the evident transcription of most (if not all) of complex vertebrate genomes, this assertion has been seriously challenged by the Hughes (Toronto) lab as based on inadequate evidence.

Other viewpoints suggest that the large fraction of eukaryotic chromosomal DNA which is seemingly superfluous is in fact necessary, but without a strong requirement for sequence specificity. We can briefly consider this area in a little more detail.

Genomic Junk and Some ‘Indifferent’  Viewpoints

One of these proposals, the ‘skeletal DNA’ hypothesis, as largely promulgated by Tim Cavalier-Smith, side-steps the problem of whether much of the genome is superfluous junk or not, in favor of a structural role for the large non-genic component of the genome. Here the sequence of the greater part of the ‘skeletal’ DNA segments is presumed to be non-specific, where the main evolutionary selective force is for genomic size per se, irrespective of the sequences of non-genic regions. Where DNA segments are under positive selection but not in a sequence-specific manner, the tracts involved have been termed ‘indifferent DNA’, which seems an apt tag in such circumstances. Cavalier-Smith proposes that genomic DNA acts as a scaffold for nuclei, and thus nuclear and cellular sizes correlate with genome sizes. At the same time, DNA content itself does not directly alter proliferating cell volumes; rather, the latter result from variation in encoded cell cycle machinery and signals (related to cellular concentrations of control factors).

Another proposal for the role of large non-coding genomic segments could be called the ‘transposable element shield’ theory. In this concept (originally put forward by Claudiu Bandea) so-called junk genomic segments reduce the risk that vital coding sequences will be subjected to insertional inactivation by parasitic mobile elements. Once it has been drawn to one’s attention, this proposal has a certain intuitive appeal. Thus, if 100% of a complex genome were composed of demonstrably functional sequences, then by definition any insertion by a parasitic transposable sequence element would knock out a function (or at least have a very high probability of doing so). If only 10% of the genome was of vital functional significance, and the rest a kind of shielding filler, then the insertional risk goes down by an order of magnitude. This model assumes that insertion of mobile elements is sequence-target neutral, or purely random in insertion site. Since this is not so for certain types of transposable elements, the Bandea proposal also encompasses the notion that protective genomic sequences are not necessarily arbitrary, but include sequences with a decoy-like function, to absorb parasitic insertions with reduced functional costs. Strictly speaking, then, this proposal is not fully ‘indifferent’ in referring to ‘junk’ DNA, but clearly is at least partially so. It should be noted as well that shielding against genomic parasitism is of significance for multicellular organisms with large numbers of somatic cells, as well as for germline protection.
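The order-of-magnitude logic is easy to make explicit. A minimal sketch, assuming (as the simplest form of the model does) that insertion sites are uniformly random; the fractions and insertion counts are purely illustrative:

```python
# If a fraction f of the genome is vital, a uniformly random insertion
# is harmful with probability f; for n independent insertions,
# P(at least one knockout) = 1 - (1 - f)**n.

def knockout_risk(f, n=1):
    """Probability that at least one of n random insertions hits
    a vital sequence, given vital genome fraction f."""
    return 1 - (1 - f) ** n

for f in (1.0, 0.5, 0.1):
    print(f"vital fraction {f:>6.1%}: one insertion -> {knockout_risk(f):.1%}; "
          f"ten insertions -> {knockout_risk(f, 10):.1%}")
# vital fraction 100.0%: one insertion -> 100.0%; ten -> 100.0%
# vital fraction  50.0%: one insertion ->  50.0%; ten ->  99.9%
# vital fraction  10.0%: one insertion ->  10.0%; ten ->  65.1%
```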

In the context of whether genomes increase in size by the accumulation of ‘junk’ or through selectable (but sequence-independent) criteria, it should be noted that a strong case has been made by Michael Lynch and colleagues for the significance of non-adaptive processes in causing changes in genome size, especially in organisms with relatively low replicative population sizes (the opposite effect to large-population prokaryotes, as noted above). The central issue boils down to energetic and other functional costs – if genome sizes can expand with negligible or low fitness cost, passive ‘junk’ can be tolerated. But a ‘strong’ interpretation of the skeletal DNA hypothesis holds that genome sizes are as large as they are for a selectable purpose – acting as a nuclear scaffold.
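The population-size argument can be made quantitative with the standard diffusion approximation for fixation probabilities. A sketch only: the selection coefficient and the two effective population sizes below are assumptions chosen for illustration, not values from Lynch’s work.

```python
import math

# Sketch of the 'drift barrier': fixation probability of a new mutation
# with selection coefficient s in a diploid population of effective
# size N (standard diffusion approximation; neutral limit is 1/(2N)).

def p_fix(s, N):
    if s == 0:
        return 1 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

s = -1e-6  # tiny replication cost of one junk insertion (assumed value)
for N in (1e8, 1e4):  # bacteria-like vs vertebrate-like effective sizes
    ratio = p_fix(s, N) / (1 / (2 * N))
    print(f"N = {N:.0e}: fixation rate relative to neutral = {ratio:.2g}")
# N = 1e+08 -> essentially zero (the insertion is efficiently purged)
# N = 1e+04 -> ~0.98 (effectively neutral: 'junk' can drift to fixation)
```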

In considering the factors influencing the genome sizes of complex organisms, some specific cases in comparative genomics are useful to highlight, as follows.

Lessons from Birds, Bats, and Other ‘Natural Experiments’

Modern molecular biology has allowed the directed reduction of significant sections of certain bacterial genomes for both scientific and technological ends. But some ‘natural experiments’ have also revealed very interesting aspects of vertebrate genomes.

One such piece of highly significant information comes from studies of the genomes of vertebrates that are true fliers, as found with birds and bats. Such organisms are noted collectively for their significantly smaller genomes in comparison to other vertebrates, especially other amniotes (reptiles and mammals). The small-genome / flight correlation has even been proposed for long-extinct ancient pterosaurs, from studies of fossil bone cell sizes. In the case of birds, genome size reduction has been attributed to loss of repetitive sequences, deletions of certain genomic segments, and (non-essential) gene loss.

A plausible explanation for the observed correlation between the ability to fly and smaller genomes is the high-level metabolic demand of flight. This dictate is argued to favor streamlined genomes, via the reduction in replicative metabolic costs. Supporting evidence for such a contention is provided by the negative correlation between genome size and metabolic rate in all tetrapods (amphibians, reptiles, birds, and mammals), where a useful measure of oxidative metabolic rate is the ‘heart index’, or the ratio of heart mass to body weight. Even among birds themselves, it has been possible to show (using heart indices) negative correlations between metabolic rates and genomic sizes. Thus, highly active fliers with relatively large flight muscle quantities tend to have smaller genomes than more sedate fliers, with hummingbirds (powerhouses of high-energy hovering flight) having the smallest genomes of all birds.
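Since the heart index is just a mass ratio, the comparison is simple to set up, as the sketch below shows. All numbers are invented for illustration and are not drawn from the cited studies; the negative trend is built into the toy data by construction.

```python
# Hypothetical (species, heart mass g, body mass g, genome size Gb)
# rows, invented to illustrate the claimed negative trend between
# the heart index (heart mass / body mass) and genome size.
data = [
    ("hummingbird", 0.30,   12, 0.91),
    ("swift",       1.10,  100, 1.10),
    ("pigeon",      3.00,  300, 1.25),
    ("chicken",     7.00, 2000, 1.30),
]

for species, heart_g, body_g, genome_gb in data:
    heart_index = heart_g / body_g   # proxy for oxidative metabolic rate
    print(f"{species:<12} heart index {heart_index:.4f}   genome {genome_gb:.2f} Gb")
# Ranked by heart index, genome size runs in the opposite direction:
# the higher the metabolic demand, the smaller the genome.
```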

It was stated earlier that closely related organisms can have quite different genome sizes, and the packaging of genomes in such cases can also differ markedly. The Indian muntjac deer has a fame of sorts among cytogeneticists, owing to its extremely low chromosome count relative to other mammals (a diploid number of only 6 in females, with an extra chromosome in males). In contrast, the Chinese muntjac has a more usual diploid chromosome count of 46, and yet this deer is closely enough related to Indian muntjacs that they can interbreed (albeit with sterile offspring, reminiscent of mules produced through horse-donkey crosses). The Indian muntjac genome is believed to be the result of chromosomal fusions, with concomitant deletion of significant amounts of repetitive DNAs, and reduction in certain intron sizes. As a result, the Indian muntjac genome is reduced in total size by about 22% relative to Chinese muntjacs.
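(Using the haploid genome sizes given in the references below, the arithmetic checks out:)

```python
# Haploid genome sizes, in Gb, as given in the references below:
indian_muntjac, chinese_muntjac = 2.17, 2.78
reduction = (chinese_muntjac - indian_muntjac) / chinese_muntjac
print(f"Indian muntjac genome is smaller by {reduction:.0%}")   # ~22%
```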

This illustration from comparative genomics once again suggests that genome size alone cannot be directly related to function. Although the link between numbers of distinct functional elements and complexity might itself be inherently complex, it is reasonable to contemplate what degrees of molecular function are required to build different organisms. If all genomes were entirely functional and ‘needed’, then much more genomic sequence would be required to build lungfishes, onions, and many other plants than human beings.

Junk vs. Garbage

A common and useful division of items that are nominally ‘useless’ has been noted by Sydney Brenner. He pointed out that most languages distinguish between stuff that is apparently useless yet harmless (‘junk’), and material that is both useless and problematic or offensive in some way (‘garbage’). An attic may accumulate large amounts of junk which sits there, perhaps for decades, without much notice, but useless items which become odoriferous or take up excessive space are promptly disposed of. The parallel he was making with genomic sequences is clear. ‘Garbage sequences’ that are, or become, deleterious in some way are rapidly removed by natural selection, but this does not apply to sequences which are merely ‘junk’.

Junk sequences thus do not immediately impinge upon fitness, at least in organisms with low population sizes. Also, ‘junk’ may be co-opted during evolution for a true functional purpose, as with the full ‘domestication’ of otherwise parasitic mobile elements. Two important points must be noted with respect to the domestication of formerly useless or even deleterious sequence elements: (1) just because some mobile element residues have become domesticated, it does not at all follow that all such sequences are likewise functional; and (2) the co-option (or ‘exaptation’) of formerly useless DNA segments does not in any way suggest that evolution has kept such sequences ‘on hand’ on the off-chance they might find a future use.

Countervailing Trends for Genomic Size

How do complex genomes expand in size, anyway? Duplication events are a frequent contributor towards such effects, and these processes can range from local effects on relatively small segments, to whole genes, and even entire genomes. The latter kind of duplication leads to a state known as polyploidy, which in some organisms can become a surprisingly stable arrangement.

Yet the major influence on genomic sizes in eukaryotes is probably the activity of parasitic mobile (transposable) elements, such that a correlation between genomic size and their percent constitution by such elements has been noted. It has been suggested that although in some cases very large genomes with a high level of transposable elements appear to be deleterious (notably certain plants believed to be on the edge of extinction), in other circumstances (large animal genomes as seen with salamanders and lungfish) a high load of transposable elements may be tolerated without significant fitness loss. The latter effect has been attributed to a slow acquisition of the mobile elements, whereby the elements tend to be inactivated by mutation or other ‘sequence decay’ mechanisms before they can spread further. This in itself can be viewed from the perspective of the ‘garbage/junk’ dichotomy: at least some transposable elements that remain active may be deleterious, and thus suitable for relegation into the ‘garbage’ box, while inactivated elements are more characteristic of ‘junk’.
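A minimal sketch of the ‘slow acquisition’ idea, with transposition and inactivation treated as simple per-copy rates (all rates and the starting copy number are assumptions for illustration):

```python
# Each generation, every active element transposes (adding one active
# copy) at rate r, and is permanently inactivated by mutation /
# sequence decay at rate d. Deterministic (expected-value) dynamics.

def te_dynamics(r, d, n0=50, generations=2000):
    """Expected counts of active copies and inert relics."""
    active, relics = float(n0), 0.0
    for _ in range(generations):
        relics += active * d
        active *= 1 + r - d
    return active, relics

for r, d in ((0.010, 0.012), (0.012, 0.010)):
    active, relics = te_dynamics(r, d)
    print(f"r={r}, d={d}: active ~{active:.0f}, inert relics ~{relics:.0f}")
# When decay outpaces transposition (r < d), the family burns out,
# leaving only inert 'junk' relics; when r > d, active copies compound
# until selection or host silencing intervenes ('garbage' territory).
```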

Yet there is documented evidence indicating a global trend in evolution towards genome reduction, in a wide diversity of organisms. When this pattern is considered along with factors increasing genomic size, it has been proposed that the overall picture is biphasic. In this view, periods of genomic expansion in specific lineages are ‘punctuated’ not by stasis (as the original general concept of ‘punctuated equilibrium’ proposed) but with slow reduction in genomic sizes. Though the metabolic demands of flying vertebrates may place special selective pressures towards genomic reduction, a general trend towards genomic contraction suggests that selection always tends to favor smaller and more efficient genomes. Even where the selective advantage of a smaller genome is slight and subtle, over evolutionary time it will inevitably be exerted, with the observed results. But at the same time, genomic copy-errors (from small segments to whole genes to entire genomes) and parasitic transposable elements act as an opposing influence towards genomic expansion. And in this context, it is important to recall the above notes (from Michael Lynch and colleagues) with respect to the importance of organismal population size in terms of the magnitudes of the selective pressures dictating the streamlining of genomes.

A human genome-reduction project (actually rendered much more feasible by the advent of new genome-editing techniques) could presumably produce a fully-functional human with a much smaller genome, but such a project would be unlikely to pass the scrutiny of institutional bioethics committees. (Arbitrary deletions engendered by blind natural selection will either be positively selected or not; a human project with the tools to reduce genome size would often lack 100% certainty that a proposed deletion would not have deleterious effects). Yet apart from this, we might also ask whether such engineered humans would have an increased risk of somatic cell mutagenesis via transposable elements (leading to cancer), if the Bandea theory of genomic shielding of transposable elements holds water.

Now, what then for parsimony in the light of the cascade of genomic information emerging in recent times?

Thrifty Interactomes?

If the junk DNA hypothesis were truly wrong in an absolute sense (that is, if all genomes were constituted from demonstrably functional sequences), then the parsimony principle might still hold at the genomic level. Here one might claim that all genomic sequences are parsimonious to the extent that they are functionally relevant, and therefore genomes are as large as functionally necessary, but no larger. Yet an abundance of evidence from comparative genomics (as discussed briefly above) suggests strongly that this interpretation is untenable. But if a typical eukaryotic energy budget derived from mitochondria allows a ‘big sloppy genome’, where does the so-called parsimony principle come in?

The best answer to this comes not from genomic size per se, but from gene number and the organization of both gene expression and gene expression products. Consider some of the best-studied vertebrate genomes, as in the Table below. If protein-coding genes only are considered, both zebrafish and mice have a higher count than humans. Nevertheless, as noted above, it is now known that non-coding RNAs, both large and small, are very important. If these are included, and a combined ‘gene tally’ thus calculated, we now find Homo sapiens coming out on top. More useful still may be the count for gene transcripts in general, since these include an important generator of genomic diversity: differential gene splicing.

[Table: comparative gene counts (protein-coding genes, non-coding RNA genes, and gene transcripts) for zebrafish, mouse, and human; Ensembl Dec 2013 release]
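As a rough stand-in for the table’s numbers, rounded counts of the general kind reported in Ensembl-era releases convey the pattern; treat them as illustrative only, since exact values shift between releases and annotation pipelines.

```python
# Rounded, illustrative counts in the spirit of the table above
# (Ensembl-era annotation; exact values vary between releases).
counts = {
    # species      (protein-coding, ncRNA genes, transcripts)
    "zebrafish": (26_000,  4_000,  57_000),
    "mouse":     (23_000,  9_000, 104_000),
    "human":     (20_000, 25_000, 196_000),
}

print(f"{'species':<10}{'coding':>9}{'ncRNA':>9}{'combined':>10}{'transcripts':>13}")
for sp, (coding, ncrna, tx) in counts.items():
    print(f"{sp:<10}{coding:>9,}{ncrna:>9,}{coding + ncrna:>10,}{tx:>13,}")
# By protein-coding genes alone, zebrafish and mouse outrank human;
# adding non-coding RNA genes (and counting transcripts, which fold in
# splice variants) reverses the ordering, as the text describes.
```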

_________________________________________________________________________

But what does this mean in terms of complexity? Are humans roughly only twice as complex as mice, or roughly three times as complex as a zebrafish? Almost certainly there is much more to the picture than that, since these superficial observations belie what is likely to be the most significant factor of all: the way expressed products of genomes (both proteins and RNAs) interact, which can impose many hidden layers of complexity onto the initial expression toolkit. These patterns of interactions comprise an organism’s interactome.

How many genes does it take to build a human? Or a mouse, or a fish? As noted earlier in this post, in the aftermath of the first results for the sequencing of the human genome, and numerous other genomes soon afterward, many onlookers expressed great surprise at the ‘low’ number of proteins apparently encoded by complex organisms. Other observers pointed out in turn that if it is not known how to build a complex creature, how could one know what an ‘appropriate’ number of genes should be? Still, a few tens of thousands of genes does seem a modest number, even factoring in additional diversity-generating mechanisms such as differential splicing. At least, this would be the case if every gene product had only a single, unique role in the biology of an organism – but this is manifestly not so.

In fact, single proteins very often have multiple roles, in multiple ways, via the global interactome. An enzyme, for example, may have the same basic activity, but quite distinct roles in cells of distinct differentiation states. Other proteins can exhibit distinct functional roles (‘moonlighting’) in different circumstances. It is via the interactome, then, that genomes exhibit biological parsimony, to a high degree.

This ‘interactomic’ theme will be developed further in the succeeding post.

Some Parsimonious Conclusions

(1) Prokaryotic genomes have strong selective pressures towards small size.

(2) Eukaryotic genomes can expand to much larger sizes, with considerable portions of redundant or non-essential segments, by mechanisms that may be non-adaptive or positively selected (skeletal DNA, transposable element shielding). Such processes include duplication of specific segments (gene duplication) or even whole-genome duplication (polyploidy). This may be countered by long-term evolutionary trends towards genome reduction, but the ‘expandability’ of eukaryotic genomes (as opposed to prokaryotes) still remains.

(3) The expressed interactomes of eukaryotes are highly parsimonious.

(4) Biological parsimony is a natural consequence of strong selective pressures, which tend to drive towards biosystem efficiency. But the selective pressures themselves are linked to the energetics of system processes, and population sizes. Thus, a biological process (case in point: genome replication) within organisms with relatively small populations and moderate energetic demands (many vertebrates) may escape strong selection for efficiency, and be subjected to genetic drift and genomic expansion, with a slow counter-trend towards size reduction. An otherwise tolerable process in terms of energetic demands (genome replication once again) may become increasingly subject to selective pressure towards efficiency (size contraction) if an organism’s metabolic demands are very high (as with flying vertebrates).

(5) Based on internal functions alone, it might be possible to synthetically engineer a complex multicellular eukaryote where most if not all of its genome had a defined function, but such an organism would likely be highly vulnerable outside the laboratory to disruption of vital sequences through insertion of parasitic mobile elements.

And to conclude, a biopolyversical rumination:

There are cases of genomes expanding

Into sizes large and outstanding

Yet interactomes still show

That parsimony will grow

Via selective pressures demanding

References & Details

(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).

‘They have small and very compact genomes, with minimal intergenic spaces and few introns.’     In cases where conventional bacteria have introns, they are frequently ‘Group I’ introns in tRNA genes, which are removed from primary RNA transcripts by self-splicing mechanisms. The ‘third domain of life’, the Archaeal prokaryotes, have tRNA introns which are removed via protein catalysts. See Tocchini-Valentini et al. 2015.

‘….their replication times are typically very short under optimal conditions….’     E. coli can replicate in about 20 minutes in rich media, for example. But not all prokaryotes are this speedy, notably some important pathogens. Mycobacterial doubling times are on the order of 16-24 hr for M. tuberculosis (subject to conditions) or as slow as 14 days for the causative agent of leprosy, M. leprae. For an analysis of the genetics of fast or slow growth in mycobacteria, see Beste et al. 2009. For much detail on Mycobacterium leprae, see this site.

A major factor for the evolution of prokaryotic organisms is their typically very large population size……’     For excellent discussion of these issues, see work from the lab of Michael Lynch, as in Lynch & Conery 2003.

‘…..this immense differential is enabled in eukaryotic cells through the energy dividend provided by mitochondria……’    See Lane & Martin 2010; Lane 2011.

‘……Mitochondria …… entered into an eventual partnership with progenitors of eukaryotic cells, and in the process underwent massive genomic reduction….’     Human mitochondrial genomes encode only 13 proteins. For a general and very detailed discussion of such issues, see Nick Lane’s excellent book, Power, Sex, Suicide (Oxford University Press, 2005).

The energetic contribution of mitochondria enabled much larger cells, with concomitantly much larger genomes.’     In the words of the famed bio-blogger PZ Myers, ‘a big sloppy genome’ [a post commenting on the hypothesis of Lane & Martin 2010]

‘….complex organisms, including Homo sapiens, show what seems at first glance to be a surprisingly low count of protein-coding genes.’      See (for example) the Ensembl genomic database.

‘…..closely related organisms can have markedly different genome sizes.’     See Doolittle 2013.

‘….even if the number of functional RNA genes were twice the size of the protein-coding set, the net genome size would still be much larger than required.’      The study of Xu et al. 2006 provides (in Supplementary Tables) the striking contrast between the estimated % of coding sequences and genome sizes for a range of prokaryotes and eukaryotes. Although slightly dated in terms of current gene counts, the low ratios of coding sequences in most of the sampled eukaryotes (especially mammals) would stand even if doubled. By the same token, with prokaryotes, a direct correlation exists between coding DNA and genome size, but this relationship falls down for eukaryotes above a certain genome size (0.01 Gb, where the haploid human genome is about 3 Gb; see Metcalfe & Casane 2013).

‘….the proponents of the functional-RNA refutation of junk DNA have pointed to the evident transcription of most if not all of complex vertebrate genomes…..’     The ENCODE project ignited much controversy by asserting that the notion of junk DNA was no longer valid, based on transcriptional and other data. (See Djebali et al. 2012; ENCODE Project Consortium 2012).  The ‘junk as bunk’ proposal has itself been comprehensively debunked by Doolittle (2013) and Graur et al. 2013.

 ‘….. this assertion [widely encompassing genomic transcription] has been seriously challenged as based on inadequate evidence.’     See Van Bakel et al. 2010.

‘…..skeletal DNA hypothesis, as largely promulgated by Tim Cavalier-Smith….’     See Cavalier-Smith 2005.

‘…..this concept (originally put forward by Claudiu Bandea) …..’      See a relevant online Bandea publication.

‘…..shielding against genomic parasitism is of significance for multicellular organisms…..’      Regardless of the veracity of the Bandea hypothesis, a variety of genomic mechanisms for protection from parasitic transposable elements have evolved; see Bandea once more.

Where DNA segments are under positive selection but not in a sequence-specific manner, the tracts involved have been termed ‘indifferent DNA…..’      See Graur et al. 2013.

‘….a strong case has been made by Michael Lynch and colleagues for non-adaptive changes in genome size….’      See Lynch 2007.

‘….molecular biology has allowed the directed reduction of significant sections of certain bacterial genomes ….’      For work on genome reduction in E. coli, see Kolisnychenko et al. 2002; Pósfai et al. 2006. For analogous work on a Pseudomonas species see Lieder et al. 2015. The Venter group has (famously) worked on synthetic genomes, which allow the most direct way of establishing the minimal genome for a prokaryotic organism. With respect to this, see Gibson et al. 2010.

‘…birds and bats. Such organisms are noted collectively for their significantly smaller genomes in comparison to other vertebrates.‘     For avian genomes, see Zhang et al. 2014; for bats, see Smith & Gregory 2009, where it was found that ‘megabats’ (larger, typically fruit-eating bats lacking sonar) are even more constrained in terms of genomic size than microbats. ‘…small-genome / flight correlation has even been proposed for long-extinct ancient pterosaurs’   See Organ & Shedlock 2009.

‘In the case of birds, genome size reduction has been attributed……’     For details in this area, see Zhang et al. 2014.

‘…..evidence of a negative correlation between genome size and metabolic rate …..A measure of oxidative metabolic rate is the ‘heart index…..’      See Vinogradov & Anatskaya 2006.

‘…highly active fliers with large relative flight muscle quantities tended to have smaller genomes than more sedate fliers. ‘      See Wright et al. 2014.

‘…hummingbirds (powerhouses of high-energy hovering flight) having the smallest genomes of all birds…’      See Gregory et al. 2009.

‘…..the Indian muntjac genome is reduced in total size by about 22% relative to Chinese muntjacs…..’      The Indian muntjac genome is about 2.17 Gb; the Chinese muntjac genome is about 2.78 Gb. See Zhou et al. 2006; Tsipouri et al. 2008.

‘……much more genomic sequence would be required to build lungfishes, onions, and many plants than human beings.’     The note regarding onions comes from T. Ryan Gregory (cited as a personal communication by Graur et al. 2013). For lungfish and many other animal genome sizes, see a comprehensive database (overseen by T.R. Gregory). For plant genomes, see another useful database.

‘….A common and useful division of items that are nominally ‘useless’ has been noted by Sydney Brenner.‘      See Brenner 1998. This ‘junk / garbage’ distinction was alluded to by Graur et al. 2013.

‘…… ‘junk’ may be co-opted during evolution for a true functional purpose, as with the full ‘domestication’ of otherwise parasitic mobile elements……’     See Hua-Van et al. 2011.

‘…. because some mobile element residues have become domesticated, it does not at all follow that all such sequences are likewise functional.’      This point has been emphasized by Doolittle 2013.

‘…..a state known as polyploidy…….’      For an excellent review on many aspects of polyploidy, see Comai 2005.

‘……a correlation between genomic size and their percent constitution by such [mobile] elements has been noted.‘ See Metcalfe & Casane 2013.

‘…..has been suggested …….. very large genomes with a high level of transposable elements appear to be deleterious …… in other circumstances ……a high load of transposable elements may be tolerated….’      See Metcalfe & Casane 2013.

‘……documented evidence indicating a global trend in evolution towards genome reduction….’ | ‘…..it has been proposed that the overall picture is biphasic. Periods of genomic expansion in specific lineages are ‘punctuated’ not by stasis (as the original general concept of ‘punctuated equilibrium’ proposed) but with slow reduction in genomic sizes. ‘     See Wolf & Koonin 2013. For a background on the theory of punctuated equilibrium, see Gould & Eldredge 1993.

‘…..human genome-reduction project (actually rendered much more feasible by the advent of new genome-editing techniques)……’      There is so much to say about these developments (including zinc finger nucleases, TALENs, and in particular CRISPR-Cas technology) that it will form the subject of a future post.

‘Ensembl Dec 2013 release‘     (see the Table above) See the Ensembl database site.

These patterns of interactions comprise an organism’s interactome.’      Note here that the term ‘interactome’ can be used in a global sense, or for a specific macromolecule. Thus, a study might refer to the ‘interactome of Protein X’, in reference to the sum total of interactions concerning Protein X in a specific organism.

Next post: September.

Parsimony and Modularity – Key Words for Life

April 21, 2015

Sometimes Biopolyverse has considered aspects of life which may be generalizable, such as molecular alphabets. This post takes a look at another aspect of complex life which is universal on this planet, and unlikely to be escapable by any complex biology. The central theme is the observation that the fundamental processes of life have an underlying special kind of economy, which may be termed biological parsimony. Owing to its scope and diversity, this will be the first of a series dealing with this issue. Here, we will look at the general notion of parsimony in a biological context, and begin to consider why such arrangements should be the rule. Some biological phenomena would seem to challenge the parsimony concept, and in this initial post we will look at certain features of the protein universe in this respect.

Thrifty Modules

In the post of January 2014, the role of biological parsimony in the generation of complexity was briefly referred to. The fundamental issue here concerns how a limited number of genes could give rise to massively complex organisms, by means of processes that shuffle and redeploy various functional components. Thus, the ‘thrifty’ or parsimonious nature of biological systems is effectively enabled by the modularity of a basic ‘parts list’. A modest aphorism could thus state:

“Parsimony is enabled by Modularity; Modularity is the partner of Parsimony”

 

The most basic example of modularity in biology can be found with molecular alphabets, which were considered in a recent post. Generation of macromolecules from linear sequence combinations of subunits from a distinct and relatively small set (an ‘alphabet’ in this context) has a clear modular aspect. Subunit ‘letters’ of an alphabet can be rearranged in a vast number of different strings, and it is this simple principle which gives biological alphabets immense power as a fundamental tool underlying biological complexity.
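The scale involved is easy to appreciate with a two-line calculation, using the standard alphabet sizes (4 nucleotides, 20 amino acids):

```python
# Number of distinct strings of length n over an alphabet of size A: A**n
A_DNA, A_PROTEIN = 4, 20

print(f"DNA 20-mers:      {A_DNA ** 20:.2e}")      # ~1.10e+12
print(f"protein 100-mers: {A_PROTEIN ** 100:.2e}") # ~1.27e+130
# Roughly 1e80 atoms are estimated in the observable universe; even a
# small protein's sequence space dwarfs that, which is the combinatorial
# power underlying alphabet-based modularity.
```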

This and several other higher-level modular aspects of biological systems are outlined in Table 1 below.

 

[Table 1 image: major levels of biological modularity, with representative examples; see the numbered notes below]

Table 1. Major Levels of Biological Modularity.

  1. Molecular alphabets: For an extended discussion of this theme, see a previous post. The modularity of any alphabet is implicit in its ability to generate extremely large numbers of strings of variable length, with specific sequences of the alphabetic ‘letters’.
  2. Small molecular scaffolds: Small molecules have vital roles in a wide variety of biological processes, including metabolic, synthetic, and regulatory activities. In numerous cases, distinct small biomolecules share common molecular frameworks, or scaffolds. The example given here (the perhydrocyclopentanophenanthrene skeleton) is the core structure for cholesterol, sex hormones, cardiac glycosides, and corticosteroids such as cortisone.

[Structure: the perhydrocyclopentanophenanthrene (steroid) ring skeleton]

  3. Protein folds: Although a large number of distinct protein folds are known, some in particular have been ‘used’ by evolution for a variety of functions. The triosephosphate isomerase (TIM) (βα)8-barrel fold (noted as the example in the above Table) has been described as the structural core of >170 encoded proteins in the human genome alone.
  4. Alternate splicing / differential intron & exon usage: The seemingly low number of protein-encoding genes in the human genome is substantially boosted by alternate forms of the splicing together of exonic (actual coding) sequence segments from single primary transcripts. This can occur by skipping or incorporation of specific exons. Also, the phenomenon of intron retention is another means of extending the functionality of primary transcripts. (A combinatorial sketch of exon skipping follows below, after this list.)
  5. Alternate / multiple promoters: Many gene products are expressed in different tissues or different developmental stages in multicellular organisms. This is often achieved through single promoters subject to differential activating or repressing influences, such as varying transcription factors, or negative regulation through microRNAs (miRNAs). Another way of extending the versatility of a single core gene is seen where more than one promoter (sometimes many) lies upstream of a core coding sequence. With this arrangement, the regulatory sequence influences on each promoter can be clearly demarcated, and transcripts from each alternate promoter can be combined with alternate splicing mechanisms (as above with (4)), often with the expression of promoter-specific 5’ upstream exons. A classic example of this configuration is found with the microphthalmia gene (MITF), which has many isoforms through alternate promoters and other mechanisms.
  6. Recombinational segments: As a means of increasing diversity with a limited set of genomic sequences, in specific cell lineages recombinational mechanisms can allow a combinatorial assortment of specific coding segments to produce a large number of variants. The modularity of such genetic sequences in these circumstances is obvious, and is a key feature of the generation of diversity by the vertebrate adaptive immune system.
  7. Protein complex subunits: Protein-protein interactions are fundamental to biological organization. There are many precedents for complexes made up of multiple protein subunits having distinct compositions in different circumstances. Thus, a single stimulus can signal very different results in different cellular backgrounds, associated with different protein complexes being involved in their respective signaling pathways. Enzymatic complexes, such as those involved in DNA repair, can also show subunit-based modularity.
  8. Cells: From a single fertilized zygote, multicellular organisms of a stunning range of shapes and forms can be grown, based on differentiation and morphological organization. Thus, cellular units can be considered a very basic form of biological modularity.
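As flagged under item 4 above, a toy enumeration shows how quickly cassette-exon choices multiply transcript output. The exon names are hypothetical, and real splicing is regulated, so realized isoform counts are smaller than the combinatorial ceiling:

```python
from itertools import combinations

# With constitutive flanking exons and k independently skippable
# internal (cassette) exons, up to 2**k mature transcripts are possible.

def isoforms(cassette_exons):
    """Yield every inclusion pattern of the cassette exons, in gene order."""
    for n in range(len(cassette_exons) + 1):
        for kept in combinations(cassette_exons, n):
            yield ("E1",) + kept + ("E5",)

variants = list(isoforms(("E2", "E3", "E4")))
print(f"{len(variants)} possible isoforms from 3 cassette exons:")
for v in variants:
    print("-".join(v))
# 8 isoforms, from E1-E5 (all skipped) to E1-E2-E3-E4-E5 (2**3 = 8)
```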

Discussion of both small molecule and macromolecular instances of modularity / parsimony will be extended in succeeding posts.

Some of these modularity levels are interlinked in various ways. For example, the evolutionary development of modular TIM barrels may have been enhanced by alternate splicing mechanisms. Indeed, the latter process may be of general evolutionary importance, particularly in the context of gene duplications. In such circumstances, one gene copy can evolve novel functions (neofunctionalization), sometimes in association with the use of alternate splice variation.

* Certainly this Table is not intended to be comprehensive with respect to modularity mechanisms, but illustrates some major instances as pertinent examples.

___________________________________________________________________

 

When a person is referred to as ‘parsimonious’, there are often connotations of miserliness, or a suggestion that the individual in question is something of a skinflint. In a biological context, on the other hand, the label of parsimony is nothing but a virtue, since it is closely associated with the efficiency of the overall biological system.

Pathways to Parsimony

When modular components can be assembled in different ways for different functions, the outcome is by definition more parsimonious than producing distinct functional forms for each task. An alphabetic system underlies the most fundamental level of parsimony, but numerous high-order levels of parsimonious assembly can also exist, as Table 1 indicates.

Evolution itself is highly conducive to parsimony, simply owing to the fact that multiple functional molecular forms can be traced back to a common ancestor which has diversified and branched through many replicative generations. As noted in the footnotes to Table 1, gene duplication (or even genome duplication) is a major means by which protein evolution can occur, via the development of functional variants in the ‘spare’ gene copies. It is the ‘tinkering’ nature of evolution which produces a much higher probability that pre-existing structures will be co-opted into new roles than entirely novel structures developed.

But there is a second evolutionary consideration in the context of biological parsimony, and that is where bio-economies, or bioenergetics, comes to the forefront. Where biosystems are in replicative competition, it is logical to assume that a system with the most efficient means of copying itself will predominate over rivals with relatively inferior processes. And the copying mechanism will be underwritten by the entire metabolic and synthetic processes used by the biosystem in question. Efficiency will thus depend on how streamlined the biosystem energy budget can be rendered, and the most parsimonious solutions to these questions will thus be evolutionarily favored.
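The compounding nature of replicative competition is worth making explicit. A one-function sketch, with growth rates chosen purely for illustration:

```python
import math

# Two self-replicators starting at equal numbers, differing only in
# net growth rate (say, through replication overhead). Simple
# exponential growth; rates are illustrative.

def abundance_ratio(r_thrifty, r_wasteful, t):
    """Ratio of thrifty to wasteful lineage sizes after time t."""
    return math.exp((r_thrifty - r_wasteful) * t)

for t in (100, 1000, 5000):
    print(f"t = {t:>4}: thrifty/wasteful ratio = "
          f"{abundance_ratio(1.00, 0.99, t):.3g}")
# A mere 1% efficiency edge yields ~2.7x dominance after 100 time
# units, ~2e4x after 1000, and ~5e21x after 5000: why streamlined
# energy budgets win replicative competitions.
```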

If evolution is a truly universal biological feature (as postulated within many definitions of life) then bioparsimony is accordingly highly likely to be a universally observed principle in any biological system anywhere in the universe.

Counterpoints and Constraints: Protein Folding

Certain observations might seem to run in a contrary fashion to the proposed fundamental nature of parsimony and modularity in biology. Let’s take protein folding as an initial case in point.

Folds and Evolution

Table 1 highlights the modularity of certain protein folds, but this is certainly not a ubiquitous trait within the protein universe. On the one hand we can cite the instances of specific protein folds which are widespread in nature, fulfilling many different catalytic or structural functions (as with the TIM-barrel fold; Table 1). Yet at the same time, it is true that many folds (>60%) are restricted to one or two functions.

While all proteins may ultimately be traceable back to a very limited set of prototypical forms (if not a universal common ancestor in very early molecular evolution), it appears that some protein folds are much more amenable to evolutionary ‘tinkering’ than others. This has been attributed to structural aspects of certain folds, in particular a property which has been termed ‘polarity’. In this context, polarity essentially refers to a combination of a highly ordered structural scaffold encompassing loop regions whose packing within the total fold is relatively ‘loose’ and amenable to sequence variation.

It follows logically that if mutations in Fold A have a much higher probability of creating novel activities than mutations in Fold B, then variants of Fold A will be more likely to expand evolutionarily (through gene duplication or related mechanisms). Here the TIM-barrel motif is a representative star for the so-called ‘Fold A’ set, which in turn are exhibitors of the polarity property par excellence.

While some natural enzymatic activities are associated with single types of folds, in other cases quite distinct protein folds can mediate the same catalytic processes. (Instances of the latter are known as analogous enzymes). It does not necessarily follow, however, that the absence in nature of an analogous counterpart for any given protein catalyst indicates that an alternative folding solution for that particular catalytic activity is not possible per se. In such circumstances, a potentially viable alternative structure (another polypeptide sequence with a novel fold constituting the potential analogous enzyme) has simply never arisen through lack of suitable evolutionary antecedents.

By their nature, the blind processes of natural selection on a molecular scale will favor certain protein folds simply by virtue of their amenability to innovation. If every catalytic or structural task could be competitively fulfilled by only a handful of folds, the protein folding universe would likely show much less diversity than is noted in extant biology. Evolution of novel folds will be favored when they are more efficient for specific tasks than existing structures. All of this is underpinned by the remarkable parsimony of the protein alphabet, especially when one reflects upon the fact that an astronomical number of possible sequences can be obtained with a linear string of amino acids corresponding to even a small protein.

Parsimony and Necessity

 Although so far this musing on parsimony and modularity has barely scratched the surface of the topic as a whole, at this point we can round off this post by considering briefly why parsimonious bio-economies should be so ubiquitously observed.

Some aspects of biology which inherently invoke parsimony may be in themselves fundamentally necessary for any biological system development. For example, molecular alphabets appear to be essential for biology in general, as argued in a previous post. Likewise, while construction of complex macroscopic organisms from a relatively small set of cell types, themselves differentiated from a single zygote, can be viewed as a highly parsimonious system, there may be no other feasible evolutionary pathway which can produce comparable functional results.

But, as indicated by the above discussion of protein folds, other cases may not be quite so clear-cut, and require further analysis. Complex trade-offs may be involved, as with the factors determining genome sizes, which we will address in the succeeding post.

It is clear that evolutionary selection for energetic efficiency is surely a contributing factor to a trend towards biological parsimony, as also noted above. But apart from bioenergetics, one might propose factors in favor of parsimony which relate to the informational content of a cell. Thus, if every functional role required for all cellular activities (replication in particular) was represented by a completely distinct protein or RNA species, it could be speculated that the resulting scale-up of complexity would place additional constraints on functional viability. A great increase in all molecular functional mediators might be commensurate with a corresponding increase in deleterious cross-interactions, solutions for which might be difficult to obtain evolutionarily. Of course, such a ‘monomolecular function’ biosystem would be unlikely to arise in the first place, when competing against more thrifty alternatives. The latter would tend to differentially thrive through reduced energetic demands, if not more ready solutions to efficient interactomes. Consequently, it probably comes down to bioenergetics once more, if a little more indirectly.

Finally, a bio-polyverse salute to the so-called parsimony principle in biology:

Evolution can tinker with bits

In ‘designing’ selectable hits

Modular innovation

Is a route to creation

Thus parsimony works, and it fits.

 

References & Details

(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).

Some of the issues covered in this post were considered in the free supplementary material for Searching for Molecular Solutions, in the entry: SMS-Extras for Ch. 9 (Under the title of Biological Thrift).

Table 1 Footnote references:

Small molecules have vital roles in a wide variety of biological processes……’      See the above supplementary downloadable material (Searching for Molecular Solutions –Chapter 9).

‘…..The triosephosphate isomerase (TIM) (βα)8-barrel fold is known as the structural core of >170 encoded proteins….’      See Ochoa-Leyva et al. 2013. Additional folds accommodating diverse functions are noted in Osadchy & Kolodny 2011.

A classic example of this [alternate promoter] configuration is found with the microphthalmia gene (MITF)….’      See SMS-Extras (as noted above; Ch.9); also Shibahara et al. 2001.

The modularity of such genetic sequences in these circumstances is obvious, and is a key feature of the generation of diversity by the vertebrate adaptive immune system.’      For a general and search-accessible overview of immune systems, see the text Immunobiology 5th Edition. For an interesting recent hypothesis on the origin of vertebrate adaptive immunity, see Muraille 2014.

‘…..a single stimulus can signal very different results in different cellular backgrounds….’   /   ‘ Enzymatic complexes, such as those involved in DNA repair, can also show subunit-based modularity.’      To be continued and expanded in a subsequent post with respect to parsimony involving proteins and their functions.

‘…..the evolutionary development of modular TIM barrels may have been enhanced by alternate splicing mechanisms.’      See Ochoa-Leyva et al. 2013.

‘….the latter process [alternate splicing] may be of general evolutionary importance, particularly in the context of gene duplications…..’      See Lambert et al. 2015.

If evolution is truly universal (as postulated within many definitions of life) …..’      See Cleland & Chyba 2002.

‘……many folds (>60%) are restricted to one or two functions.’     See Dellus-Gur et al. 2013; Tóth-Petróczy & Tawfik 2014.

‘…..some natural enzymatic activities are associated with single types of folds…’      An example is dihydrofolate reductase (cited also in Tóth-Petróczy & Tawfik 2014), the enzymatic activity of which is mediated by a fold not used by any other known biological catalysts.

‘…..a property which has been termed ‘polarity’ ….’      These concepts have been promoted by Dan Tawfik’s group. See Dellus-Gur et al. 2013; Tóth-Petróczy & Tawfik 2014.

‘….in other cases quite distinct protein folds can mediate the same catalytic processes. (Instances of the latter are known as analogous enzymes).’      See Omelchenko et al. 2010.

‘…..an astronomical number of possible sequences can be obtained with a linear string of amino acids corresponding to even a small protein.‘     See an earlier post for more detail on this.

‘…..molecular alphabets appear to be essential for biology in general…..’ See also Dunn 2013.

Next Post: August.

Evolutionary Constraints, Natural Bioengineering, and ‘Irreducibility’

January 25, 2015

Many prior biopolyverse posts have concerned evolutionary themes, either directly or indirectly. In the present offering, we consider in more detail factors which may limit what evolutionary processes can ‘deliver’. More to the point, are there biological structures which we can conceive, but which could never be produced through evolution, even in principle?

It has been alleged by proponents of so-called ‘Intelligent Design’ (ID) that some features of observable biology are so complex that no intermediate precursor forms can be envisaged in a feasible evolutionary pathway. Of course, the hidden (or not so hidden) agenda with such people is the premise that if natural biological configurations of sufficient complexity exist such that they are truly ‘irreducible’, then one must look to some form of divine intervention to kick things along. In fact, all such ‘irreducibly complex’ examples proffered by such parties have been convincingly demolished by numerous workers with more than a passing familiarity with the mechanism of evolution.

These robust refutations in themselves cannot prove that there is no such thing as a truly evolutionarily irreducible structure in principle. What is needed, then, is not to attempt to find illusory non-evolvable biological examples in the observable biosphere, but to identify holes in the existing functional and structural repertoire as manifested by all living organisms collectively. Biological ‘absences’ could result from two broad possible scenarios: features which are possible, but not present simply due to the contingent nature of evolutionary pathways, and features which have not appeared because there is no feasible route by which they could arise. (Perhaps a third possibility would exist for ID enthusiasts, whereby God had inscrutably chosen not to create any truly irreducible biological prodigies). Of course, deciding between the ‘absent but possible’ and ‘absent and never feasible’ alternatives is not always going to be simple, if indeed it ever is.

The Greatest Show On Any Planet

Richard Dawkins has called it the Greatest Show on Earth. Sean Carroll used words of Darwin himself, “endless forms most beautiful”. These and many other authors have been struck by the incredibly diverse array of living creatures found in a huge variety of terrestrial environments. With the great insights triggered by the labors of Darwin and Wallace, all of this biological wonder can be seen as having been shaped and molded by the blind and cumulative hand of natural selection. And once understood, selective processes can be seen to operate in a universal sense, from single molecules to the most complex arrangements of matter, as long as each entity possesses the means for its own replication. It is for this reason that Darwinian evolution has been proposed as a universal hallmark of life anywhere, whatever form its replicative essence may take. While there may be few things which are truly universal in a biological sense (see a previous post for the view that molecular alphabets are one such case in point), it is hard to escape the conclusion that change through evolution and life go hand-in-hand, no matter what form such life may take.

So do the ‘endless’ outpourings of biological design innovation ever reach some kind of end-point? There is a classic example that can be considered at this point.

Unmakeable?

It has often been claimed that a truly human invention unrepresented in nature is the wheel, and this absence has been proposed as a possible true case of ‘irreducible complexity’. At the molecular level, however, wheel-like structures have been documented. Three such cases are known, all rotary molecular motors: the bacterial flagellum, and the two rotary motor components of ATP synthase. Remarkable as the latter structures are, it is of course the macroscopic level that people have had in mind when contemplating the apparently wheel-less natural world.

It will be instructive to make a brief diversion to consider what constraints might operate for a biological wheel design on a macroscale, and their general implications for the selection of complex systems. We can refer to a hypothetical macroscopic wheel-organ in a biological organism as a ‘macrobiowheel’, to distinguish it from true molecular-level rotary wheel-like systems. Although beyond the molecular scale, such an organ need not be large, and could in principle be associated with any multicellular animal. Such a postulated biological wheel structure could be used for locomotion in either terrestrial or aquatic environments, using rolling or propeller motion, respectively.

First there is a pseudo-example which should be noted. The animal phylum Rotifera encompasses the set of multicellular though microscopic ‘wheel animalcules’, rotifers, which superficially are characterized by a wheel-like locomotory organ in their aquatic environments. In fact, these ‘wheels’ are an illusory effect created by the sweeping motion of rings of cilia, and thus need not be considered further for the present purposes. Wheels of biological origin that can be unambiguously confirmed with the naked eye (or even a simple microscope) are thus conspicuous by their absence. Is this mere contingency, or strict necessity?

 Re-inventing the Wheel

Let’s consider what would be required to construct a macrobiowheel. Firstly, one would have to define what physical features are required – is the wheel structure analogous to bone or other biological organs composed of hard inorganic materials? The problem of how blood vessels and nerves could cross the gap between an axle and a wheel hub has been raised as a seemingly insurmountable constraint – but with some imagination potential solutions could be conceived. For example, the axle and bearings could be bathed in a very narrow fluid-filled gap, where vessels on the other side of the gap take up nutrients and transport them to the rest of the living wheel structure (a heart-like pump within the wheel might be required to ensure the efficiency of this, depending on the size of the hypothetical animal). Transmission of nerve signals might be more problematic; perhaps the macrobiowheel could be insensate, although this would presumably be a disadvantage. Conceivably, the same fluid-filled gap could also act as a ‘giant synapse’ for nerve transmission, such that perception of the state of the wheel structure is received as a continuous whole, without discrimination as to specific local wheel regions. (This would thus alert an organism to a problem with its macrobiowheel organ without specifying which particular part is involved; a better arrangement than no information at all). Another possibility is the use of perturbations in local electric fields as a ‘remote’ sensing device, as used by a variety of animals, including the Australian platypus. The rotational motion for the ‘drive axle’ might be obtained from successive linear muscle-powered movements of structures coupled to the axle by gear-like projections.

No doubt much more could be said on this particular theme, but that will be unnecessary. The issue here is not to indulge in wild speculation, but to make the point that it is uncertain whether a biowheel of any scale at the macro-level is an impossibility purely from a biological systems viewpoint. So perhaps we could be so bold as to claim that with sufficient ingenuity of design, a true macrobiowheel could be assembled in a functional manner. But having acknowledged this, the formal possibility that a macrobiowheel could exist is not at all the same thing as the question of whether a feasible pathway could be envisaged for such a structure to emerge in terrestrial animals by natural selection. The potential problems to be addressed are: (1) too large a jump in evolutionary ‘design space’ (across a fitness landscape) is required; (2) [along with (1)] no selective advantage of intermediate forms is apparent; and (3) [along with (1) and (2)] the energy requirements for the system may be unfavorable compared with alternative designs such as conventional vertebrate limbs (consider the problem noted above of isolating the macrobiowheel circulatory system from that of the rest of the organism).

The first problem, the ‘design space jump’ conundrum, implicitly states that a macromutation producing a functional macrobiowheel would be a practical impossibility. In the brief speculation above as to how such a biological wheel might be constructed, it is quite clear that multiple novel processes would be required; the macrobiowheel would need to be supported by multiple novel subsystems. Where a macromutation producing any one such subsystem is exceedingly improbable, the chances of the entire package emerging at once are effectively zero. So it is one thing to design a complete and optimized macrobiowheel; to propose a pathway for evolutionary acquisition of this exotic feature, we must also rationalize ‘intermediate’ structures with positive fitness attributes for the organism. Thus, even if one of the postulated macromutations should amazingly appear, it would be useless for an evolutionary pathway leading to macrobiowheels unless a fitness advantage was conferred. (As always, natural selection cannot anticipate any potential advantage down the line, but adaptations selected for one function may be co-opted for other functions later in evolutionary time.) A depiction of the constraints on evolution of macrobiowheels is presented in Fig. 1 below.
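
To see why this ‘entire package at once’ scenario collapses numerically, consider a minimal sketch in Python (both figures here are invented purely to illustrate the scaling, not measured values):

    # Joint probability of several independent macromutations arising together.
    # Both numbers below are assumptions for illustration only.
    p_single = 1e-9     # assumed chance of any one novel subsystem arising per generation
    subsystems = 5      # assumed number of novel subsystems a macrobiowheel needs
    p_all_at_once = p_single ** subsystems
    print(f"joint probability per generation: {p_all_at_once:.0e}")   # 1e-45

Even with generous per-subsystem odds, the joint probability shrinks geometrically with each additional required subsystem – the quantitative sense in which ‘effectively zero’ is meant above.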

 

[Figure 1 image: FitnessValleys]

 

Fig. 1. Representations of fitness landscapes for evolution of a locomotion system for a multicellular organism. Here the vertical axes denote relative fitness of expressed phenotypes; different peaks represent distinct genotypes. In all cases, dotted lines indicate ‘moves’ to novel genotypes that are highly improbable (gray) or proscribed through transition to a state of reduced fitness (red). A. In this landscape it is assumed that a macrobiowheel is inherently biologically possible. In other words, for present purposes it is taken that there exists a genotype from which a macrobiowheel can be expressed as a functional phenotype. Yet such a genotype may not be accessible through evolutionary processes. The conclusion of A is that even though a biological construct corresponding to a macrobiowheel is possible, it cannot feasibly arise naturally, since it is effectively impossible to cross an intervening fitness ‘valley’ in a single jump (A to X; gray dotted line), and transition to intermediate forms cannot occur, owing to their lowered fitness relative to any feasible starting point (A to B or C (gray-shaded); red dotted line). In turn, transitions from B or C to peak X (purple dotted lines) cannot take place. It is also implicit in this schema that no other feasible pathway to configurations B or C exists. Thus, configuration (genotype) X is a true case of unattainable or ‘irreducible’ complexity. B. Depiction of a conventional evolutionary pathway whereby the same starting point as in (A) transitions to an improved locomotory arrangement through intermediate forms, each conferring fitness benefits.

_____________________________________________________________________

So, ‘true irreducibility’ can result in principle from an inability to create intermediate steps, from universal pre-commitment to an alternative design, or finally from an absolute incapacity to biologically support the proposed function. Also, the likelihood of a biological innovation acting as a fitness advantage is fundamentally dependent on the nature of the environment. Thus, with respect to our macrobiowheel musings, it has been pointed out that an absence of roads might counter any tendency for wheel-based locomotion to arise. It is not clear, though, whether an organism dwelling in an environment characterized by flat plains might benefit from wheel mobility; in any case, this issue is not relevant to macroscopic aquatic organisms and hypothetical wheel-like ‘biopropellers’ driven by rotary motion (as opposed to the real, but micro-scale, rotary bacterial flagella).

A Very Indirect Biological Route to Crossing Fitness Valleys

In a previous post concerning synthetic biology, it was noted that human ambitions for tinkering with biological molecules need not suffer from the same types of limitations which circumscribe the natural world. ….. So if a macrobiowheel is compatible with biological systems at all, humans with advanced biotechnologies could in principle design and construct such a system. Such circumstances are schematically depicted in Fig. 2.

 

 

[Figure 2 image: HumanAgency]

 

Fig. 2. Potential role of human intervention in the generation of ‘unevolvable’ biological systems, as exemplified here with macrobiowheels. Here the natural fitness landscape of Fig. 1 (orange trace) has superimposed upon it peaks corresponding to biological constructs of human origin. Since the human synthetic biological approach circumvents loss of low-fitness forms through reproductive competition*, ‘intermediate’ forms are all depicted here as having equal fitness. Thus, by human agency, intermediate forms B and C can be used as synthetic stepping stones towards the final (macrobiowheel) product, despite their non-viability under natural conditions (Fig. 1). Alternatively, should it be feasible at both the design and synthetic levels, ‘direct’ assembly of a genome expressing the macrobiowheel structure might be attainable (direct arrow to the ‘X’ peak).

*Note that this presupposes that completely rational design could be instituted, although in reality artificial evolutionary processes might be used to achieve the desired results. But certainly no third-party competitors would be involved here.

_____________________________________________________________________

Construction of a macrobiowheel would serve to validate the hypothesis that such an entity is biologically possible. Also, demonstration of a final functional wheel-organ would greatly facilitate analysis of what pathways would have to be followed if an equivalent structure were to evolve naturally. This would then consolidate the viewpoint that a true macrobiowheel is indeed biologically irreducibly complex. But since other structures and pathways might still exist, it would not serve as formal proof of the irreducibility stance in this case.

The ‘human agency’ inset of Fig. 2 has itself evolved from biological origins, just as for any other selectable attribute. Therefore, from a broad viewpoint, a biological development (human intelligence) can in itself afford an unprecedented pathway for the crossing of fitness valleys which otherwise would be naturally insurmountable. So whether we are speaking of exotica such as macrobiowheels or of any other biological structures with truly ‘irreducible complexity’, their existence could in principle be realized at some future time through the agency of advanced human synthetic biology. And given the current pace of scientific change, such times may arrive much sooner than many might believe.

 

Finally, we leave this theme with a relevant biopoly(verse) offering:

 

Biological paths may reveal

What evolution can thus make real

Yet beyond such constraints

And purist complaints

Could we make a true bio-based wheel?

 

 

References & Details

(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).

‘….proponents of so-called ‘Intelligent Design….’     The ‘poster boy’ of ID is quite probably Michael Behe, of Lehigh University and the Discovery Institute. He is the author of Darwin’s Black Box – The Biochemical Challenge to Evolution (Free Press, 1996), and more recently The Edge of Evolution – The Search for the Limits of Darwinism (Simon & Schuster 2008).

‘…..all such ‘irreducibly complex’ examples proffered by such parties have been convincingly demolished…’     See Zuckerkandl 2006; also a National Academy of Sciences publication by a group of eminent biologists.

‘……a third possibility would exist for ID enthusiasts…..’     A personal perspective: A religious fundamentalist once asked me why there are no three-legged animals; he seemed to somehow think that their absence was evidence against evolution. Of course, the shoe is definitely on the other foot in this respect. If God created low-fitness animal forms that prevailed (among which tripedal animals would likely be included), or fabulous creatures without any conceivable evolutionary precursors, then that in itself would be counted as ID evidence.

‘ Richard Dawkins has called it the Greatest Show on Earth.’    This refers to his book, The Greatest Show on Earth: The Evidence for Evolution. Free Press (2010).

Sean Carroll used words of Darwin himself, “endless forms most beautiful”.     The renowned developmental biologist Sean Carroll published a popular book entitled Endless Forms Most Beautiful – The New Science of Evo Devo, which gives a wonderful overview of the field of evolutionary development, or how the development of multicellular organisms from single cells to adult forms has been shaped by evolution. Darwin referred to “endless forms most beautiful” in the final section of The Origin of Species.

‘….the blind and cumulative hand of natural selection.’      This is not to say that the complete structure of biological entities, from genome to adult phenotype, is entirely a product of classical natural selection, but the latter process is of prime significance. For a very informative discussion of some of these issues, and the influence of non-adaptive factors in evolution, see Lynch 2007.

‘……Darwinian evolution has been proposed as a universal hallmark of life anywhere….’      For a cogent discussion of the NASA ‘evolutionary’ definition and related issues, see Benner 2010.

‘……the wheel, and this absence has been proposed as a possible true case of ‘irreducible complexity’  ‘     See Richard Dawkins’ The God Delusion, Bantam Press (2006).

‘…….the bacterial flagellum……’     For a description of the rotary flagellar motor, see Sowa et al. 2005; Sowa & Berry 2008.

‘……two component molecular motors of ATP synthase…..’      See Capaldi & Aggeler 2002; Oster & Wang 2003.

‘….animal phylum Rotifera……..’      See Baqui et al. (2000) for their rotifer site, which provides much general information and further references.

‘…….how blood vessels and nerves could cross the gap between an axle and a wheel hub has been raised as a seeming insurmountable constraint……’ | ‘….an absence of roads might counter any tendency for wheel-based locomotion to arise…..’      See again Dawkins’ The God Delusion, Bantam Press (2006).

‘……..the use of perturbations in local electric fields as a ‘remote’ sensing device, as used by a variety of animals, including the Australian platypus.’     For more background on electroreception, especially in the platypus, see Pettigrew 1999, and Pedraja et al. 2014.

‘……could exist is not at all the same thing as the question of whether a feasible pathway could be envisaged for such a structure to emerge ……. by natural selection.’ For an extension of this theme at the functional RNA level, see Dunn 2011.

‘Fig. 1. Representations of fitness landscapes…..’     Further discussion of evolutionary problems in surmounting fitness valleys can be found in Dunn 2009. The title of Dawkins’ book Climbing Mount Improbable (1997; W. W. Norton & Co) is in itself a fine metaphor for how cumulative selectable change can result in exquisite evolutionary ‘designs’, which of course is the major theme of the book.

‘……advanced human synthetic biology….’ The ongoing role of synthetic biology in testing a variety of possible biological scenarios was also discussed in a previous post under the umbrella term of ‘Kon-Tiki’ experiments.

Next Post: April.

Speed Matters: Biological Synthetic Rates and Their Significance

September 21, 2014

Previous posts from biopolyverse have grappled with the question of biological complexity (for example, see the post of January 2014). In addition, the immediate predecessor to the current post (April 2014) discussed the essential role of molecular alphabets in allowing the evolution of macromolecules, themselves a necessary precondition for the complexity requirements underlying functional biology as we understand it. Yet although molecular alphabets enable very large molecules to become the springboard for biological systems, another often overlooked factor in their synthesis exists, and that is the theme of the present post.

Initially, it will be useful to consider some aspects of the limitations on molecular size in living organisms.

How Big is Big?

If it is accepted that biological complexity requires molecules of large sizes (as examined in the previous post), what determines the upper limits of such macromolecules? At the most fundamental level of chemistry, ultimately determined by the ability of carbon atoms to form concatenates of indefinite length, no direct constraints on biomolecular size appear to exist. In seeking examples to demonstrate this, we need look no further than the very large single duplex DNA molecules which constitute individual eukaryotic chromosomes. The wheat 3B chromosome is among the largest known of these, with almost a billion base pairs, and a corresponding molecular weight of around 6.6 x 10^11 daltons.
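
As a rough back-of-the-envelope check, a minimal sketch in Python (assuming the standard average figure of ~650 daltons per base pair):

    # Approximate mass of the wheat 3B chromosome as a single DNA duplex.
    # Assumes ~650 daltons per base pair, a standard average value.
    base_pairs = 1.0e9      # almost a billion base pairs
    da_per_bp = 650.0       # average molecular weight per base pair
    print(f"approximate mass: {base_pairs * da_per_bp:.1e} Da")   # ~6.5e+11 Da

This recovers the order of magnitude quoted above.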

But in almost all known eukaryotic cases, an individual chromosome does not equate with genome size. In other words, a general rule is that it takes more than one chromosome to constitute even a haploid (single-copy) genome. Why then should not all genomes be composed of a single (very) long DNA string, rather than being constituted from separate chromosomal segments? And why should separate organisms differ so markedly in their chromosome numbers (karyotypes)? At least a part of an answer to this may come down to contingency, where alternative chromosomal arrangements may have been equally effective, but one specific configuration has become arbitrarily fixed during evolution of a given species. But certainly other factors must exist which are connected ultimately to molecular size. A DNA molecule of even ‘average’ chromosomal size in free solution would be an impractical prospect for containment within a cell nucleus of eukaryotic dimensions, unless it was ‘packaged’ in a manner such that its average molecular volume was significantly curtailed. And of course the DNA in natural chromosomes is indeed packaged into specific complexes with various proteins (particularly histones), and to a lesser extent RNA, termed chromatin.
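
The scale of the packaging problem is easily appreciated with round numbers (a sketch assuming the haploid human genome of ~3 x 10^9 base pairs, the ~0.34 nm axial rise per base pair of B-form DNA, and a nucleus of roughly 10 µm diameter):

    # Contour length of an unpackaged haploid human genome vs. nuclear size.
    bp_total = 3.0e9        # ~3 billion base pairs (haploid human genome)
    nm_per_bp = 0.34        # axial rise per base pair in B-form DNA
    nucleus_m = 10e-6       # ~10 micrometre nuclear diameter

    length_m = bp_total * nm_per_bp * 1e-9
    print(f"total DNA contour length: ~{length_m:.1f} m")     # ~1 m
    print(f"nuclear diameter:         ~{nucleus_m:.0e} m")    # ~1e-05 m

A linear disparity of roughly five orders of magnitude: hence the indispensability of chromatin-level compaction.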

Yet even a good packaging system must have its limits, and in this respect it is likely that selective pressures exist that act as restrictions on the largest chromosomal sizes. An extremely long chromosomal length may eventually reach a point where its functional efficiency is reduced, and organisms bearing such karyotypic configurations would be at a selective disadvantage.

No biological proteins can begin to rival the sheer molecular weights of chromosomal DNA molecules, but once again there is no fundamental law that prevents polypeptide chains from attaining an immense length, purely from a chemical point of view. Of course, proteins (in common with functional single-stranded RNA molecules) have a very significant constraint placed upon them relative to linear DNA duplexes. Biological proteins must fold into specific three-dimensional shapes even to attain solubility, let alone exhibit the astonishing range of functions which they can manifest. This folding is directed by primary amino acid sequence, and this requirement dramatically reduces the number of potentially useful forms which could arise from a polypeptide of even modest length. Yet since the largest proteins (such as titin, considered in the previous post) are composed of a series of joined modules, the ‘module-joining’ could in principle be extended indefinitely to produce proteins of gargantuan size.

So why not? Why aren’t proteins on average even bigger? Here one might recall a saying attributed to Einstein, “Keep things as simple as possible, but no simpler”, and repackage it into an evolutionary context. Although many caveats can be introduced, it is valid to note that evolutionary selection will tend to drive towards the most parsimonious ‘solutions’ to biological imperatives. Thus, the functions performed by proteins are usually satisfied by molecules which are large by the standards of small-molecule organic chemistry, but much smaller than titin-sized giants of nearly 30,000 amino acid residues. A larger version of an existing protein will require an increased energy expenditure for its synthesis, and will therefore be selected against unless it offers a significant counter-balancing advantage over the existing wild-type form.

So selective pressures ultimately deriving from the cellular energy balance-sheet will often favor smaller molecules, if they can successfully compete against larger alternatives. But another factor to note in this context – and this brings us to the major theme of this post – is the sheer time it takes to synthesize an exceedingly large molecule. Clearly, this synthetic time is itself determined by the maximal production rates which can be achieved by biochemical mechanisms available to an organism. Yet even with the most efficient systems, it is inevitable that eventually a molecular size threshold will be crossed where the synthetic time requirement becomes a negative fitness factor. In this logical scenario, a ‘megamolecule’ might provide a real fitness benefit, but lose competitiveness through the time lag required for its synthetic production relative to alternative smaller molecular forms.
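
Simple arithmetic makes the point (a sketch assuming a eukaryotic ribosomal elongation rate of ~5 amino acid residues per second, a commonly cited order-of-magnitude figure):

    # Time for a single ribosome to translate proteins of different lengths,
    # at an assumed eukaryotic elongation rate of ~5 residues per second.
    rate_aa_per_s = 5.0
    for name, length_aa in [("typical ~400-residue protein", 400),
                            ("titin-sized giant", 30000)]:
        minutes = length_aa / rate_aa_per_s / 60
        print(f"{name}: ~{minutes:.0f} min per molecule")

On these figures, a single titin molecule ties up a ribosome for on the order of 100 minutes, against roughly a minute for an average protein – a vivid illustration of the synthetic-time cost of molecular gigantism.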

These ‘drag’ effects of biosynthetic time requirements are not merely hypothetical, and can be relevant for chromosomal DNA replication, to briefly return to the same example as used above. Although as we have seen, chromosome length and number do not directly equate with genome size, as far as a cell is concerned, it is the entire genome that must be replicated before cell division can proceed. In this respect, it is notable that certain plants have genomes of such size that their genomic replication becomes a significant rate-limiting step in comparison to other related organisms.

Life in the Fast Lane

Let’s consider primordial replicative biosystems (perhaps pre-dating even the RNA World, and certainly the RNA-DNA-Protein World – see a previous post), where the machinery for replication of informational biomolecules is at a rudimentary stage of evolutionary development. In such a case, it can be proposed that an individual biosystem will selectively benefit from mutations in catalysts directing its own replication, where the mutational changes increase the efficiency and rate of replicative synthesis. This simply follows from the supposition that for biosystems A and B replicating in time t, if n copies of A are made for every one copy of B (where n > 1.0), then A systems will eventually predominate. Even values of n only marginally greater than 1.0 will still produce the same end result, given enough replication cycles. In principle, numerous factors could result in an enhancement of this n value, but here we are assuming that a simple increase in replicative rate would do the trick.
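
The arithmetic of this competition is stark (a minimal sketch; the value of n is arbitrary):

    # If A yields n copies for every single copy of B per replication cycle,
    # the A:B ratio after t cycles is simply n**t.
    n = 1.01    # a modest 1% per-cycle advantage (arbitrary illustrative value)
    for t in (100, 1000, 5000):
        print(f"after {t} cycles: A/B ratio ~ {n**t:.3g}")

Even n = 1.01 yields an A:B ratio of about 4 x 10^21 after 5000 cycles – exponential growth guarantees that any reproducible advantage above parity eventually dominates.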

But improved replicative rates could also have an accelerating effect on early biosystem molecular evolution, by enabling the synthesis of larger molecular forms than were previously feasible. This assumes that a slow replication rate for essential biomolecular components of an early ‘living’ system would mean that its upper molecular size limits were much more constrained than for alternative ‘faster’ variants. Such a scenario could arise for any very long molecular concatenate whose replication rate was too slow to be an effective functional member of a simple co-operative molecular system. Faster replication rates would then be in effect enabling factors for increased molecular size, and in turn increased molecular complexity. Fig. 1 depicts this putative effect in two possible modes of operation.

 

[Figure 1 image: BioSynthRates-Models]

 

Fig. 1: Proposed effects of enhancement in synthetic rates as enabling factors for increased molecular size and complexity in early biosystems. Increased rates of biosynthesis leading to increased replicative rates in themselves provide a selective advantage (top panel). Yet an acceleration of synthetic rate potential could also act as an enabling factor for increased potential molecular size, and in turn increasingly complex molecular structures. This might occur through ‘quantum leaps’ (bottom panel, A), where at certain crucial junctures a small rate increase has a large flow-on effect in terms of size enablement, or via a more continuous process (B), where rate increases are always associated with size and complexity enablement. In both cases, though, such effects could not continue indefinitely, owing to an increasing need for regulation of synthetic rates within complex biosystems.

___________________________________________________________________________

 

In a very simple replicative system, a single catalyst might determine the replication rate of all its individual components, and accordingly the replication speed of the system as a whole. But increasing catalytic replicative efficiency could become a victim of its own success as system complexity (associated with enhanced reproductive competitiveness) rises. In such cases, differential replicative rates of different components will determine system efficiency. It is both energetically wasteful and potentially a wrench in the works if system components needed in only a few copies are made at the same level as components needed in hundreds of copies. Clearly, system regulation is needed in such circumstances, and without it, molecular replication enhancement is likely to be detrimental beyond a certain point. This eventuality is schematically depicted in Fig. 2.
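
A toy comparison conveys the scale of the waste (all copy numbers and the unit cost here are invented for illustration):

    # Energetic cost of regulated vs. unregulated component production.
    # Unregulated synthesis makes every component at the level of the
    # most-demanded one; regulated synthesis makes only what is needed.
    needed = {"catalyst": 5, "scaffold": 20, "carrier": 500}   # copies required
    cost_per_copy = 1.0                                        # arbitrary energy units

    regulated = sum(n * cost_per_copy for n in needed.values())
    unregulated = len(needed) * max(needed.values()) * cost_per_copy
    print(f"regulated cost:   {regulated:.0f}")     # 525
    print(f"unregulated cost: {unregulated:.0f}")   # 1500

Here the unregulated system spends nearly three times the energy for the same functional output – and the disparity only widens as the spread of required copy numbers grows.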

 

 

[Figure 2 image: ReplicativeSpeed&Regulat]

 

Fig. 2: Proposed effect of introduced regulatory sub-systems on sustaining enhanced biosystem replicative rates. The suggestion here is that even at the same replicative speed, a regulated system will be better off than an unregulated one, and that higher speeds may be permitted by tight regulation – although even here, limits apply. A complete absence of regulation would probably apply only in the very earliest of emerging biosystems. In other words, co-evolved regulation is likely to have been a fundamental feature of biosystem synthetic rates, since an imbalance between the rates of production of the components of gene expression would be deleterious even in simple systems.

___________________________________________________________________________

 

Until this point, we have been considering replication of biosystem molecules in quite simplistic terms. In real systems of a biological nature, functional molecules undergo several levels of processing beyond their basic replicative synthesis. It is appropriate at this point to take a quick look at some of these.

 

Processing Levels and Biological Synthetic Speed

 

In even relatively simple bacterial cells, both RNA and protein molecules typically undergo extensive processing, in a variety of ways. And this trend is considerably more pronounced in complex eukaryotes. Although an in-depth discussion of such effects is beyond the scope of the present post, some of them (but by no means all) are listed in Table 1 below.

 

[Table 1 image: Biosynth-Stages&Rates-TABLE]

 

Table 1. Levels of processing beyond primary transcription or translation. These processes can be considered as secondary steps which are required for the complete maturation of biological macromolecules, varying by type and biological circumstances. Where several processing levels are necessary, any one of them is potentially a rate-limiting step for production of the final mature species. It should be noted that while some of these processes are near-universal (such as accurate protein folding following primary polypeptide chain expression), others are restricted to a relatively small subset of biological systems (such as protein splicing via inteins).

___________________________________________________________________________

One way of enhancing the overall production rates of biological macromolecules bearing modifications after primary transcription and translation is to couple processes together. For protein expression, mRNA transcription and maturation are themselves necessary initial steps, and mRNA and protein synthesis are in fact coupled in prokaryotic cells. Where transcription and translation are so linked, a nascent RNA chain can interact with a ribosome for initiation of polypeptide translation before transcription is complete.
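
The feasibility of such coupling is reflected in the approximate matching of the rates involved; a sketch using commonly cited round figures for E. coli (~45 nucleotides/second for transcription and ~15 residues/second for translation):

    # Rate-matching of coupled transcription and translation in E. coli.
    txn_nt_per_s = 45.0     # RNA polymerase elongation (round figure)
    tln_aa_per_s = 15.0     # ribosomal elongation (round figure)
    codons_supplied = txn_nt_per_s / 3.0    # 3 nucleotides per codon
    print(f"transcription supplies ~{codons_supplied:.0f} codons/s")
    print(f"a ribosome consumes    ~{tln_aa_per_s:.0f} codons/s")

With both processes running at roughly 15 codons per second, a leading ribosome can track RNA polymerase along the nascent message without stalling or falling behind.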

In contrast, such transcriptional-translational coupling is not found in eukaryotic cells, where mature mRNAs are exported from the nucleus for translation via cytoplasmic ribosomes. Yet examples of ‘process coupling’ can certainly still be uncovered in complex eukaryotes, with a good example being the coupling of primary transcription with the removal of intervening sequences (introns) via splicing mechanisms mediated by the RNA-protein complexes termed spliceosomes.

The sheer complexity of the diverse processing events for macromolecular maturation in known biological systems serves to emphasize the above-noted point that regulation of the replication of biomolecules in general is not a luxury, but an absolute prerequisite. Before complex biosystems had any prospect of emerging in the first place, at least basic regulatory systems for replicative processes would necessarily have already been in place, in order to allow the smooth ‘meshing of parts’ which is part and parcel of life itself.

Speed Trade-Offs and Regulation

There is certainly more than one way for a replicative system to run off the rails, like a metaphorical speeding locomotive, if increasing replicative rates are not accompanied by regulatory controls. A key factor which will inevitably become highly significant in this context is the replicative error rate, or replicative fidelity. ‘Copying’ at the molecular level would ideally be perfect, but this is no more attainable in an absolute sense than the proverbial perpetual motion machine, and for analogous entropic reasons. Thus, what a biosystem gains on the roundabouts with an accentuated replication rate, it may lose on the swings through reduced replicative accuracy. The problem of fidelity, particularly with the replication of key informational DNA molecules, has been addressed up to a point by the evolution of proof-reading mechanisms (where DNA polymerases possess additional enzymatic capabilities for excising mismatched base-pairs), and DNA repair systems (where damaged DNA is physically restored to its original state, to avoid damage-related errors being passed on with the next replication round). Although such systems might seem obviously beneficial for an organism, there are trade-offs in such situations. Proof-reading may act as a brake on replicative speeds, and also comes at a significant energetic cost.
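
The quantitative stakes are easily seen (a sketch using round, commonly cited per-base error rates and a mammalian-scale genome; the exact values vary by polymerase and organism):

    # Expected errors per genome replication at different fidelity levels.
    genome_bp = 3.0e9
    for stage, error_rate in [("base selection alone", 1e-5),
                              ("plus proof-reading", 1e-7),
                              ("plus mismatch repair", 1e-9)]:
        print(f"{stage}: ~{genome_bp * error_rate:,.0f} errors/replication")

Without proof-reading and repair, tens of thousands of errors would accompany each replication of a large genome; with them, the tally falls to a handful – but each layer of correction exacts its price in time and energy.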

The complexities of regulatory needs also dictate that rates at some levels of biological synthesis are less than what could be achieved were the component ‘factories’ to be completely unfettered. A good example of this is the relative rate of translation in prokaryotes vs. eukaryotes, where the latter have a significantly slower rate of protein expression on ribosomes. It is highly likely that a major reason for this is the greater average domain complexity of eukaryotic proteins, which require a concomitantly longer time for correct folding to occur, usually as directed by protein chaperones. A striking confirmation of this, as well as a very useful application, has been to employ mutant ribosomes in E. coli with a slower expression rate. When this was done, significant enhancement of the folding of eukaryotic proteins was observed, to the point where proteins otherwise virtually untranslatable in E. coli could be successfully expressed.

Speed Limits In Force?

How can the rates of biological syntheses be slowed down? In principle, one could envisage a number of ways that this could be achieved. In one such process, the degeneracy of the genetic code (where a single amino acid is specified by more than one codon) has been exploited through evolutionary time as a means for ‘speed control’ in protein synthesis. Degenerate ‘synonymous’ triplet codons differ in the third ‘wobble’ position. For example, the amino acid alanine is specified by four mRNA codons: GCA, GCG, GCC, and GCU. Where synonymous codons in mRNAs are recognized by specific subsets of transfer RNA (tRNA) molecules within the total tRNA group charged with the same amino acid, translational speed can be significantly influenced by the size of the relevant intracellular tRNA pools. To illustrate this in simplified form, consider a specific amino acid X with codons A, B, C, and D, for which the corresponding tRNA molecules a, b, c, and d exist (such that when charged with the correct amino acid, tRNA-aX, tRNA-bX, tRNA-cX and tRNA-dX are formed). Here we arbitrarily assign tRNA-a and –b as mutually recognizing both the codons A and B, and likewise tRNA-c and –d as mutually recognizing the codons C and D. If the tRNA pools for the latter C and D codons are smaller than those for the A and B codons, then the C / D synonymous codons are ‘slow’ in comparison with A and B. A known determinant of tRNA pool size (and thus in turn of codon translational efficiency and speed) is the respective tRNA gene copy number. Thus, in this model, it would be predicted that the gene copy number for (A + B) would be significantly greater than for (C + D). Where there are selectable benefits in slowing down translation rates, the use of ‘slow’ codons is thus a useful strategy known to be pervasively applied in biology.
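
The scheme just outlined is easily made concrete (a toy model only: the assumption that per-codon decoding time is inversely proportional to the size of the recognizing tRNA pool, and the pool sizes themselves, are inventions for illustration):

    # Toy model of 'fast' (A/B) vs. 'slow' (C/D) synonymous codons, where
    # decoding time per codon is taken as inversely proportional to the
    # abundance of the tRNA pool that reads it. Pool sizes are invented.
    trna_pool = {"A": 100, "B": 100, "C": 10, "D": 10}   # arbitrary units

    def translation_time(codons, k=1.0):
        """Total decoding time: k / pool size, summed over the message."""
        return sum(k / trna_pool[c] for c in codons)

    fast_message = ["A", "B"] * 10    # 20 codons drawn from the large pools
    slow_message = ["C", "D"] * 10    # the same amino acids via 'slow' codons
    print(f"fast codons: {translation_time(fast_message):.2f} time units")   # 0.20
    print(f"slow codons: {translation_time(slow_message):.2f} time units")   # 2.00

A ten-fold difference in tRNA pool size thus translates directly into a ten-fold difference in decoding time for the same encoded peptide.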

So, the initial and simplistic picture of ‘more is better’ which is logically applicable in very basic organized biosystems (Fig. 1) is not compatible with more advanced cellular systems. This must be kept in mind if we ask whether current biological synthetic rates could be accelerated across the board, either through natural evolution or artificial synthetic biological intervention. So much interlinking of distinct biological processes exists that it would seem difficult for evolutionary change itself to have much impact on synthetic rates in the most fundamental circumstances. Single mutations that accelerate a synthetic process will almost always fail to accommodate the global biosystem’s optimal requirements, and therefore elicit a fall in fitness. From this stance, fundamental synthetic rates would seem likely to be ‘locked in’ or ‘frozen’ by the need for each component of complex regulatory networks to be compatible with each other. Synthetic biology, on the other hand, is not necessarily limited in this way, but even here the would-be biological tinkerer would have to construct multiple changes in a biosystem at once. So global and fundamental changes in biological synthetic rates are not likely to be on the agenda in the near-term future.

To conclude, a biopoly(verse) appropriate for this post’s theme:

 

Let’s consider synthetic speed

As a potent driver, indeed

An organism’s fate

May come down to rate

The faster, the more it can breed

 

 

But recall the many caveats made above with respect to regulation…..

 

References & Details

(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).

‘……..wheat 3B chromosome is among the largest known of these……….’     See Paux et al. 2008.

‘….in almost all known eukaryotic cases, an individual chromosome does not equate with genome size.’     The Australian ant Myrmecia pilosula (the ‘jack jumper’ ant) has been reported to have only a single chromosomal pair, such that somatic cells of haploid males bear only a single chromosome. See Crosland & Crozier 1986.

‘An extremely long chromosomal length may eventually reach a point where its functional efficiency is reduced, and organisms bearing such karyotypic configurations would be at a selective disadvantage.’     The evolution of chromosome length cannot be studied without considering the role of non-coding DNA, which composes a large percentage of the total genomes of many organisms. By reducing the amounts of non-coding DNA tracts relative to coding sequences, chromosome number can be reduced without necessitating commensurately extended individual remaining chromosomes.

‘….the number of potentially useful forms which could arise from a polypeptide of even modest length….’     Even a small protein of 100 amino acid residues could in principle be composed of 20^100 different sequences; for a protein of titin size, the number is beyond hyper-astronomical (20^26,926).
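
For a sense of these magnitudes expressed as powers of ten, a two-line calculation suffices:

    import math
    # Protein sequence-space sizes 20^L, expressed as powers of ten.
    for L in (100, 26926):
        print(f"20^{L} is about 10^{L * math.log10(20):.0f}")

That is, roughly 10^130 and 10^35,032 respectively; for comparison, the observable universe is commonly estimated to contain only ~10^80 atoms.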

‘….titin-sized giants of nearly 30,000 amino acid residues….’       Titins and other very large proteins are found in muscle tissues, where they have a physical role as molecular ‘springs’ and fibers, or their attendant co-functionary species. It is presumed that in this specialized context, proteins of such extreme size were advantageous over possible alternatives with smaller macromolecules.

‘…..certain plants have genomes of such size that their genomic replication becomes a significant rate-limiting step…’    Here the plant Paris japonica, with 1.3 x 10^11 base pairs, is the current record-holder, and has a correspondingly slow growth rate. See a Science report by Elizabeth Pennisi.

‘….protein splicing via inteins….’     For a recent review and discussion of intein applications, see Volkmann & Mootz 2013.

‘……a good example being the coupling of primary transcription with the removal of intervening sequences (introns) via splicing mechanisms ……. ‘     See Lee & Tam 2013 for a recent review.

‘……such systems [proof-reading and repair] might seem obviously beneficial for an organism, there are trade-offs in such situations….’      It is also interesting to consider that a low but significant level of mutation is ‘good’ in evolutionary terms, in providing (in part, along with other mechanisms such as recombination) the raw material of genetic diversity upon which natural selection can act. But of course, this benefit is not foreseen by selection upon individual organisms: only immediately selectable factors such as metabolic costs are relevant in such contexts.

‘…..proof-reading mechanisms (where DNA polymerases possess additional enzymatic capabilities for excising mismatched base-pairs……’     Proof-reading DNA polymerases possess 3’-exonucleolytic activity that excises base mismatches, allowing corrective re-insertion of the appropriate base.

‘……has been to employ mutant ribosomes in E. coli with a slower expression rate. ….. significant enhancement of the folding of eukaryotic proteins was observed….’      For this work, and a little more background on eukaryotic vs. prokaryotic expression, see Siller et al. 2010.

‘…..the degeneracy of the genetic code (where a single amino acid is specified by more than one codon) has been exploited through evolutionary time as a means for ‘speed control’….’      Different classes of eukaryotic proteins have different requirements for enforced ‘slow-downs’, and secreted and transmembrane proteins are major examples of those which benefit from such imposed rate controls. (See Mahlab & Linial 2014). Additional complications arise from the role of sequence context effects (local mRNA sequence environments), as noted in prokaryotes by Chevance et al. 2014. In E. coli, many specific synonymous codons can be removed and replaced with others with little apparent effect on fitness, but notable exceptions to this have been found. See in this respect the study by Lajoie et al. 2013.

 

Next post: January 2015.