
Speed Matters: Biological Synthetic Rates and Their Significance

September 21, 2014

Previous posts from biopolyverse have grappled with the question of biological complexity (for example, see the post of January 2014). In addition, the immediate predecessor to the current post (April 2014) discussed the essential role of molecular alphabets in allowing the evolution of macromolecules, themselves a necessary precondition for the complexity requirements underlying functional biology as we understand it. Yet although molecular alphabets enable very large molecules to become the springboard for biological systems, another often overlooked factor in their synthesis exists, and that is the theme of the present post.

Initially, it will be useful to consider some aspects of the limitations on molecular size in living organisms.

How Big is Big?

If it is accepted that biological complexity requires molecules of large sizes (as examined in the previous post), what determines the upper limits of such macromolecules? At the most fundamental level of chemistry, ultimately determined by the ability of carbon atoms to form concatenates of indefinite length, no direct constraints on biomolecular size appear to exist. In seeking examples to demonstrate this, we need look no further than the very large single duplex DNA molecules which constitute individual eukaryotic chromosomes. The wheat 3B chromosome is among the largest known of these, with almost a billion base pairs, and a corresponding molecular weight of around 6.6 x 10^11 Daltons.
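That figure can be checked on the back of an envelope, assuming the standard average of roughly 650 Daltons per base pair of duplex DNA:

```python
# Back-of-envelope mass of the wheat 3B chromosome, assuming the
# standard average of ~650 Daltons per base pair of duplex DNA.
AVG_BP_MASS_DA = 650        # approximate Daltons per base pair
base_pairs = 1.0e9          # wheat 3B: almost a billion base pairs

mass_da = base_pairs * AVG_BP_MASS_DA
print(f"{mass_da:.1e} Da")  # 6.5e+11 Da -- on the order of 6.6 x 10^11
```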

But in almost all known eukaryotic cases, an individual chromosome does not equate with genome size. In other words, a general rule is that it takes more than one chromosome to constitute even a haploid (single-copy) genome. Why then should not all genomes be composed of a single (very) long DNA string, rather than being constituted from separate chromosomal segments? And why should separate organisms differ so markedly in their chromosome numbers (karyotypes)? At least a part of an answer to this may come down to contingency, where alternative chromosomal arrangements may have been equally effective, but one specific configuration has become arbitrarily fixed during evolution of a given species. But certainly other factors must exist which are connected ultimately to molecular size. A DNA molecule of even ‘average’ chromosomal size in free solution would be an impractical prospect for containment within a cell nucleus of eukaryotic dimensions, unless it was ‘packaged’ in a manner such that its average molecular volume was significantly curtailed. And of course the DNA in natural chromosomes is indeed packaged into specific complexes with various proteins (particularly histones), and to a lesser extent RNA, termed chromatin.

Yet even a good packaging system must have its limits, and in this respect it is likely that selective pressures exist that act as restrictions on the largest chromosomal sizes. An extremely long chromosomal length may eventually reach a point where its functional efficiency is reduced, and organisms bearing such karyotypic configurations would be at a selective disadvantage.

No biological proteins can begin to rival the sheer molecular weights of chromosomal DNA molecules, but once again there is no fundamental law that prevents polypeptide chains from attaining an immense length, purely from a chemical point of view. Of course, proteins (in common with functional single-stranded RNA molecules) have a very significant constraint placed upon them relative to linear DNA duplexes. Biological proteins must fold into specific three-dimensional shapes even to attain solubility, let alone exhibit the astonishing range of functions which they can manifest. This folding is directed by primary amino acid sequence, and this dictate dramatically reduces the number of potentially useful forms which could arise from a polypeptide of even modest length. Yet since the largest proteins (such as titin, considered in the previous post) are composed of a series of joined modules, the ‘module-joining’ could in principle be extended indefinitely to produce proteins of gargantuan size.

So why not? Why aren’t proteins on average even bigger? Here one might recall a saying attributed to Einstein, “Keep things as simple as possible, but no simpler”, and repackage it into an evolutionary context. Although many caveats can be introduced, it is valid to note that evolutionary selection will tend to drive towards the most parsimonious ‘solutions’ to biological imperatives. Thus, the functions performed by proteins are usually satisfied by molecules which are large by the standards of small-molecule organic chemistry, but much smaller than titin-sized giants of nearly 30,000 amino acid residues. A larger version of an existing protein will require an increased energy expenditure for its synthesis, and will therefore be selected against unless it offers a significant counter-balancing advantage over the existing wild-type form.

So selective pressures ultimately deriving from the cellular energy balance-sheet will often favor smaller molecules, if they can successfully compete against larger alternatives. But another factor to note in this context – and this brings us to the major theme of this post – is the sheer time it takes to synthesize an exceedingly large molecule. Clearly, this synthetic time is itself determined by the maximal production rates which can be achieved by biochemical mechanisms available to an organism. Yet even with the most efficient systems, it is inevitable that eventually a molecular size threshold will be crossed where the synthetic time requirement becomes a negative fitness factor. In this logical scenario, a ‘megamolecule’ might provide a real fitness benefit, but lose competitiveness through the time lag required for its synthetic production relative to alternative smaller molecular forms.

These ‘drag’ effects of biosynthetic time requirements are not merely hypothetical, and can be relevant for chromosomal DNA replication, to briefly return to the same example as used above. Although as we have seen, chromosome length and number do not directly equate with genome size, as far as a cell is concerned, it is the entire genome that must be replicated before cell division can proceed. In this respect, it is notable that certain plants have genomes of such size that their genomic replication becomes a significant rate-limiting step in comparison to other related organisms.

Life in the Fast Lane

Let’s consider primordial replicative biosystems (perhaps pre-dating even the RNA World, and certainly the RNA-DNA-Protein World – see a previous post), where the machinery for replication of informational biomolecules is at a rudimentary stage of evolutionary development. In such a case, it can be proposed that an individual biosystem will selectively benefit from mutations in catalysts directing its own replication, where the mutational changes increase the efficiency and rate of replicative synthesis. This simply follows from the supposition that for biosystems A and B replicating in time t, if for every one copy of B, n copies of A are made (where n > 1.0), then A systems will eventually predominate. Even values of n only marginally greater than 1.0 will have the same end result, given enough time. In principle, numerous factors could result in an enhancement of this n value, but here we are assuming that a simple increase in replicative rate would do the trick.
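This compounding effect is easy to sketch in a toy simulation (the 5% rate advantage and the round counts are arbitrary illustrative values, not measured rates):

```python
# Toy model of two replicating biosystems A and B: in each round,
# A makes n copies for every single copy B makes (n > 1.0).
# Even a modest advantage compounds until A dominates the population.

def fraction_A_after(rounds, n, a0=1.0, b0=1.0):
    """Fraction of the total population that is A after `rounds` cycles."""
    a, b = a0, b0
    for _ in range(rounds):
        a *= n      # n copies of A per round...
        b *= 1.0    # ...for each single copy of B (B's rate normalized to 1)
    return a / (a + b)

# An arbitrary 5% rate advantage (n = 1.05), starting 50:50:
for t in (0, 50, 200):
    print(t, round(fraction_A_after(t, 1.05), 3))
# the A fraction climbs from 0.5 toward 1.0 as rounds accumulate
```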

But improved replicative rates could also have an accelerating effect on early biosystem molecular evolution, by enabling the synthesis of larger molecular forms than were previously feasible. This assumes that a slow replication rate for essential biomolecular components of an early ‘living’ system would mean that its upper molecular size limits were much more constrained than for alternative ‘faster’ variants. Such a scenario could arise for any very long molecular concatenate whose replication rate was too slow to be an effective functional member of a simple co-operative molecular system. Faster replication rates would then be in effect enabling factors for increased molecular size, and in turn increased molecular complexity. Fig. 1 depicts this putative effect in two possible modes of operation.




Fig. 1: Proposed effects of enhancement in synthetic rates as enabling factors for increased molecular size and complexity in early biosystems. Increased rates of biosynthesis leading to increased replicative rates in themselves provide a selective advantage (top panel). Yet it can also be considered that an acceleration of synthetic rate potential could also act as an enabling factor for increased potential molecular size, and in turn increasingly complex molecular structures. This might occur through ‘quantum leaps’ (bottom panel, A), where at certain crucial junctures a small rate increase has a large flow-on effect in terms of size enablement, or via a more continuous process (B), where rate increases are always associated with size and complexity enablement. In both cases, though, such effects could not occur indefinitely, owing to an increasing need for regulation of synthetic rates within complex biosystems.



In a very simple replicative system, a single catalyst might determine the replication rate of all its individual components, and accordingly the replication speed of the system as a whole. But increasing catalytic replicative efficiency could become a victim of its own success as system complexity (associated with enhanced reproductive competitiveness) rises. In such cases, differential replicative rates of different components will determine system efficiency. It is both energetically wasteful and potentially a wrench in the works if system components needed in only a few copies are made at the same level as components needed in hundreds of copies. Clearly, system regulation is needed in such circumstances, and without it, molecular replication enhancement is likely to be detrimental beyond a certain point. This eventuality is schematically depicted in Fig. 2.





Fig. 2: Proposed effect of introduced regulatory sub-systems on sustaining enhanced biosystem replicative rates. This suggests that even at the same replicative speed, a regulated system will be better off than an unregulated one; and that higher speeds may be permitted by tight regulation. But limits are placed even here. Absence of controlled regulation would probably apply only in the very earliest of emerging biosystems. In other words, the co-evolved regulation is likely to have been a fundamental feature of biosystem synthetic rates, since an imbalance between rates of production of the components of gene expression would be deleterious even in simple systems.



Until this point, we have been considering replication of biosystem molecules in quite simplistic terms. In real systems of a biological nature, functional molecules undergo several levels of processing beyond their basic replicative synthesis. It is appropriate at this point to take a quick look at some of these.


Processing Levels and Biological Synthetic Speed


In even relatively simple bacterial cells, both RNA and protein molecules typically undergo extensive processing, in a variety of ways. And this trend is considerably more emphasized in complex eukaryotes. Although an in-depth discussion of such effects is beyond the scope of the present post, some of them (but by no means all) are listed in Table 1 below.




Table 1. Levels of processing involving primary transcription or translation. These processes can be considered as secondary steps which are required for the complete maturation of biological macromolecules, varying by type and biological circumstances. Where several processing levels are necessary, any one of them is potentially a rate-limiting step for production of the final mature species. It should be noted that while some of these processes are near-universal (such as accurate protein folding following primary polypeptide chain expression), some are restricted to a relatively small subset of biological systems (such as protein splicing via inteins).


One way of enhancing the overall production rates of biological macromolecules bearing modifications after primary transcription and translation is to couple processes together. For protein expression, mRNA transcription and maturation is itself a necessary initial step, and mRNA and protein synthesis are in fact coupled in prokaryotic cells. Where transcription and translation are so linked, a nascent RNA chain can interact with a ribosome for polypeptide translation initiation before transcription is complete.
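The time saving from such coupling can be sketched as a simple pipeline overlap (a toy model; all durations and the lag value are arbitrary illustrative units, not measured biological rates):

```python
# Toy pipeline model of prokaryotic transcription-translation coupling.
# Uncoupled: the ribosome waits for the complete mRNA before starting.
# Coupled: the ribosome initiates on the nascent chain, trailing the
# polymerase by a short lag. All durations are arbitrary units.

transcription_t = 10.0   # time to transcribe the full mRNA
translation_t = 12.0     # time to translate it end to end
ribosome_lag = 1.0       # hypothetical head start before translation begins

uncoupled = transcription_t + translation_t
coupled = max(transcription_t, ribosome_lag + translation_t)

print(uncoupled)  # 22.0
print(coupled)    # 13.0 -- the two stages largely overlap
```

The point of the `max()` is simply that the coupled total is governed by whichever stage finishes last, rather than by the sum of the two.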

In contrast, such transcriptional-translational coupling is not found in eukaryotic cells, where mature mRNAs are exported from the nucleus for translation via cytoplasmic ribosomes. Yet examples of ‘process coupling’ can certainly still be uncovered in complex eukaryotes, with a good example being the coupling of primary transcription with the removal of intervening sequences (introns) via splicing mechanisms mediated by the RNA-protein complexes termed spliceosomes.

The sheer complexity of the diverse processing events for macromolecular maturation in known biological systems serves to emphasize the above-noted point that regulation of the replication of biomolecules in general is not a luxury but an absolute prerequisite. Before complex biosystems had any prospects of emerging in the first place, at least basic regulatory systems for replicative processes would necessarily have already been in place, in order to allow the smooth ‘meshing of parts’ which is part and parcel of life itself.

Speed Trade-Offs and Regulation

There is certainly more than one way for a replicative system to run off the rails, like a metaphorical speeding locomotive, if increasing replicative rates are not accompanied by regulatory controls. A key factor which will inevitably become highly significant in this context is the replicative error rate, or replicative fidelity. ‘Copying’ at the molecular level would ideally be perfect, but this is no more attainable in an absolute sense than the proverbial perpetual motion machine, and for analogous entropic reasons. Thus, what a biosystem can gain in the roundabouts with an accentuated replication rate, it may lose in the swings with loss of replicative accuracy. The problem of fidelity, particularly with the replication of key informational DNA molecules, has been addressed up to a point by the evolution of proof-reading mechanisms (where DNA polymerases possess additional enzymatic capabilities for excising mismatched base-pairs), and DNA repair systems (where damaged DNA is physically restored to its original state, to avoid damage-related errors being passed on with the next replication round). Although such systems might seem obviously beneficial for an organism, there are trade-offs in such situations. Proof-reading may act as a brake on replicative speeds, and also comes at a significant energetic cost.
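The swings-and-roundabouts trade-off can be caricatured numerically; the sketch below assumes purely illustrative speed and error-rate figures, not measured biological values:

```python
# Caricature of the speed/fidelity trade-off: a fast replicator with a
# higher per-base error rate vs. a slower proof-reading one.
# All rates here are illustrative placeholders, not biological data.

GENOME = 1_000_000          # bases to copy

def copy_stats(bases_per_sec, error_rate):
    """Return (copying time, expected errors) for one replication round."""
    time_s = GENOME / bases_per_sec
    expected_errors = GENOME * error_rate
    return time_s, expected_errors

fast = copy_stats(1000, 1e-5)   # no proof-reading: quick but sloppy
slow = copy_stats(500, 1e-8)    # proof-reading: half the speed, far fewer errors

print(fast)   # ~(1000.0, 10.0): fast, but ~10 expected errors per round
print(slow)   # ~(2000.0, 0.01): twice as slow, but almost error-free
```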

The complexities of regulatory needs also dictate that rates at some levels of biological synthesis are less than what could be achieved were the component ‘factories’ to be completely unfettered. A good example of this is the relative rate of translation in prokaryotes vs. eukaryotes, where the latter have a significantly slower rate of protein expression on ribosomes. It is highly likely that a major reason for this is the greater average domain complexity of eukaryotic proteins, which require a concomitantly longer time for correct folding to occur, usually as directed by protein chaperones. A striking confirmation of this, as well as a very useful application, has been to employ mutant ribosomes in E. coli with a slower expression rate. When this was done, significant enhancement of the folding of eukaryotic proteins was observed, to the point where proteins otherwise virtually untranslatable in E. coli could be successfully expressed.

Speed Limits In Force?

How can the rates of biological syntheses be slowed down? In principle, one could envisage a number of ways that this could be achieved. In one such process, the degeneracy of the genetic code (where a single amino acid is specified by more than one codon) has been exploited through evolutionary time as a means for ‘speed control’ in protein synthesis. Degenerate ‘synonymous’ triplet codons typically differ at the third (‘wobble’) position. For example, the amino acid alanine is specified by four mRNA codons, GCA, GCG, GCC, and GCU. Where synonymous codons in mRNAs are recognized by specific subsets of transfer RNA (tRNA) molecules within the total tRNA group charged with the same amino acid, translational speed can be significantly influenced by the size of the relevant intracellular tRNA pools.

To illustrate this in simplified form, consider a specific amino acid X with codons A, B, C, and D, where corresponding tRNA molecules a, b, c, and d exist (such that when charged with the correct amino acid, tRNA-aX, tRNA-bX, tRNA-cX and tRNA-dX are formed). Here we arbitrarily assign tRNA-a and tRNA-b as mutually recognizing both the codons A and B, and likewise tRNA-c and tRNA-d as mutually recognizing the codons C and D. If the tRNA pools for the latter C and D codons are smaller than those for the A and B codons, then the C / D synonymous codons are ‘slow’ in comparison with A and B. A known determinant of tRNA pool size (and thus in turn of codon translational efficiency and speed) is the respective tRNA gene copy number. Thus, in this model, it would be predicted that the gene copy number for (A + B) would be significantly greater than for (C + D). Where there are selectable benefits in slowing down translation rates, the use of ‘slow’ codons is thus a useful strategy known to be pervasively applied in biology.
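The hypothetical X / A-B-C-D scheme above can be rendered as a toy calculation, assuming (purely for illustration) that per-codon dwell time is inversely proportional to the size of the matching tRNA pool:

```python
# Toy model of translational 'speed control' via tRNA pool sizes.
# Synonymous codons A/B are served by abundant tRNAs, C/D by scarce
# ones; per-codon dwell time is taken as inversely proportional to
# the matching tRNA pool (an illustrative assumption, not a law).

TRNA_POOL = {             # hypothetical relative pool sizes
    "A": 100, "B": 100,   # 'fast' synonymous codons
    "C": 10,  "D": 10,    # 'slow' synonymous codons
}

def translation_time(codons):
    """Relative time to translate a codon sequence."""
    return sum(1.0 / TRNA_POOL[c] for c in codons)

fast_message = ["A", "B", "A", "B"]
slow_message = ["C", "D", "C", "D"]
print(translation_time(fast_message))  # ~0.04
print(translation_time(slow_message))  # ~0.4 -- tenfold slower
```

Swapping ‘fast’ codons for their ‘slow’ synonyms changes the translation time without changing the encoded protein at all, which is the essence of the speed-control strategy.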

So, the initial and simplistic picture of ‘more is better’ which is logically applicable in very basic organized biosystems (Fig. 1) is not compatible with more advanced cellular systems. This must be kept in mind if we ask whether current biological synthetic rates could be accelerated across the board, either through natural evolution or artificial synthetic biological intervention. So much interlinking of distinct biological processes exists that it would seem difficult for evolutionary change itself to have much impact on synthetic rates in the most fundamental circumstances. Single mutations that accelerate a synthetic process will almost always fail to accommodate the global biosystem’s optimal requirements, and therefore elicit a fall in fitness. From this stance, fundamental synthetic rates would seem likely to be ‘locked in’ or ‘frozen’ by the need for each component of complex regulatory networks to be compatible with each other. Synthetic biology, on the other hand, is not necessarily limited in this way, but even here the would-be biological tinkerer would have to construct multiple changes in a biosystem at once. So global and fundamental changes in biological synthetic rates are not likely to be on the agenda in the near-term future.

To conclude, a biopoly(verse) appropriate for this post’s theme:


Let’s consider synthetic speed

As a potent driver, indeed

An organism’s fate

May come down to rate

The faster, the more it can breed



But recall the many caveats made above with respect to regulation…..


References & Details

(In order of citation, giving some key references where appropriate, but not an exhaustive coverage of the literature).

‘……..wheat 3B chromosome is among the largest known of these……….’     See Paux et al. 2008.

‘….in almost all known eukaryotic cases, an individual chromosome does not equate with genome size.’     The Australian ant Myrmecia pilosula (the ‘jack jumper’ ant) has been reported to have only a single chromosomal pair, such that somatic cells of haploid males bear only a single chromosome. See Crosland & Crozier 1986.

An extremely long chromosomal length may eventually reach a point where its functional efficiency is reduced, and organisms bearing such karyotypic configurations would be at a selective disadvantage.‘     The evolution of chromosome length cannot be studied without considering the role of non-coding DNA, which composes a large percentage of the total genomes of many organisms. By reducing the amount of non-coding DNA relative to coding sequences, chromosome number can be reduced without commensurately extending the remaining individual chromosomes.

‘….the number of potentially useful forms which could arise from a polypeptide of even modest length….’     Even a small protein of 100 amino acid residues could in principle be composed of 20^100 different sequences; for a protein of titin size, the number is beyond hyper-astronomical (20^26,926).
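As a quick check of those exponents (Python’s arbitrary-precision integers make the digit counts direct to compute):

```python
from math import log10

# Sequence-space sizes quoted above: a 20-letter amino acid alphabet
# gives 20**L possible sequences for an L-residue protein.
print(len(str(20 ** 100)))        # 131 -- i.e. about 1.3 x 10^130 sequences

# Titin-scale (26,926 residues): count decimal digits via logarithms.
titin_digits = int(26926 * log10(20)) + 1
print(titin_digits)               # ~35,000 decimal digits in 20**26926
```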

‘….titin-sized giants of nearly 30,000 amino acid residues….’       Titins and other very large proteins are found in muscle tissues, where they have a physical role as molecular ‘springs’ and fibers, or their attendant co-functionary species. It is presumed that in this specialized context, proteins of such extreme size were advantageous over possible alternatives with smaller macromolecules.

‘…..certain plants have genomes of such size that their genomic replication becomes a significant rate-limiting step…’    Here the plant Paris japonica, with 1.3 x 10^11 base pairs, is the current record-holder, and has a correspondingly slow growth rate. See a Science report by Elizabeth Pennisi.

‘….protein splicing via inteins….’     For a recent review and discussion of intein applications, see Volkmann & Mootz 2013.

‘……a good example being the coupling of primary transcription with the removal of intervening sequences (introns) via splicing mechanisms ……. ‘     See Lee & Tam 2013 for a recent review.

‘……such systems [proof-reading and repair] might seem obviously beneficial for an organism, there are trade-offs in such situations….’      It is also interesting to consider that a low but significant level of mutation is ‘good’ in evolutionary terms, in providing (in part, along with other mechanisms such as recombination) the raw material of genetic diversity upon which natural selection can act. But of course, this benefit is not foreseen by selection upon individual organisms: only immediately selectable factors such as metabolic costs are relevant in such contexts.

‘…..proof-reading mechanisms (where DNA polymerases possess additional enzymatic capabilities for excising mismatched base-pairs……’     Proof-reading DNA polymerases possess 3’→5’ exonucleolytic activity that excises mismatched bases, allowing correct re-insertion of the appropriate base.

‘……has been to employ mutant ribosomes in E. coli with a slower expression rate. ….. significant enhancement of the folding of eukaryotic proteins was observed….’      For this work, and a little more background on eukaryotic vs. prokaryotic expression, see Siller et al. 2010.

‘…..the degeneracy of the genetic code (where a single amino acid is specified by more than one codon) has been exploited through evolutionary time as a means for ‘speed control’….’      Different classes of eukaryotic proteins have different requirements for enforced ‘slow-downs’, and secreted and transmembrane proteins are major examples of those which benefit from such imposed rate controls. (See Mahlab & Linial 2014). Additional complications arise from the role of sequence context effects (local mRNA sequence environments), as noted in prokaryotes by Chevance et al. 2014. In E. coli, many specific synonymous codons can be removed and replaced with others with little apparent effect on fitness, but notable exceptions to this have been found. See in this respect the study by Lajoie et al. 2013.


Next post: January 2015.

