Wednesday, October 26, 2011

An Open Letter to Ard Louis

Dear Professor Louis:

You have presented the view that from a theological perspective evolution is not objectionable. You explained, for instance, that we ought not to confuse mechanism with meaning. If the creator used evolution as a creation tool, that need not detract from the meaning of the creation. And you thoughtfully defended Leibniz’s arguments that occasional divine intervention demeans God’s craftsmanship and that God doesn’t do miracles to satisfy the wants of nature, but rather those of grace.

But have you considered what theology has to say about scientific realism? Solomon, for example, wrote that “It is the glory of God to conceal a matter, But the glory of kings is to search out a matter.” Is there not a theological mandate that science ought to adhere to the evidence and data?

I ask the question because evolution is so often at odds with the scientific evidence. For instance, science suggests that the evolution of even a single protein is highly unlikely. One study concluded that the number of evolutionary experiments required to evolve a protein is 10^70, while another study concluded that the maximum number of evolutionary experiments possible is only 10^43. So the number of evolutionary experiments required is 27 orders of magnitude greater than the number of evolutionary experiments possible.

By any reasonable measure, a 27-order-of-magnitude shortfall is at least tantamount to “highly unlikely.” In fact, this estimate is conservative for several reasons. First, these studies were performed not by skeptics but by evolutionists.

Second, these studies were not carefully selected to magnify the problem; on the contrary, they are optimistic. The conclusion that 10^70 evolutionary experiments are required to evolve a protein was arrived at using only part of a protein, and only part of its function was considered. Also, other pre-existing proteins were used in the experiment.

And the conclusion that the number of evolutionary experiments possible is 10^43 was computed by making every assumption as optimistic as possible. The evolutionists computed a range of values, and 10^43 was the upper end of their range. It was computed assuming a four-billion-year time frame and assuming the pre-existence of an earth full of bacteria. The time frame is two to three orders of magnitude too large (proteins must have evolved in a matter of millions, not billions, of years). And bacteria need thousands of, yes, proteins. So even to compute the number of evolutionary experiments available to evolve a protein, it was again necessary to assume the pre-existence of proteins.

The evolutionists did provide a more conservative estimate of the number of evolutionary experiments possible, reducing the number from 10^43 to 10^21. This increases the evolutionary shortfall from 27 orders of magnitude to 49 orders of magnitude. But even in this more conservative estimate the evolutionists continued to use the four-billion-year time frame and the pre-existence of bacteria (with their many thousands of pre-existing proteins).
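For readers who want to check the arithmetic, here is a minimal sketch (mine, not from the studies) showing how the 27- and 49-order-of-magnitude shortfalls follow from the figures cited above:

```python
import math

# Figures as cited above (reported in the studies discussed in this post):
required = 1e70                # evolutionary experiments needed to evolve one protein
possible_optimistic = 1e43     # upper-bound estimate of experiments available
possible_conservative = 1e21   # the studies' more conservative estimate

# The shortfall, in orders of magnitude, is the difference of the base-10 logarithms.
print(round(math.log10(required) - math.log10(possible_optimistic)))    # 27
print(round(math.log10(required) - math.log10(possible_conservative)))  # 49
```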

Therefore, according to today’s science, the evolution of even a single protein, by the evolutionists’ own reckoning, is unworkable. This is, of course, one particular example in a consistent trend. Science presents substantial problems with the theory of evolution. Is this not a matter we should search out?

Saturday, October 22, 2011

NT Wright Versus Karl Giberson

NT Wright has cogently argued that evolutionary thinking did not begin in 1859 and Darwin was not an intellectual revolutionary who single-handedly illuminated a new truth. In fact, the evolutionary foundation and framework were already in place “long before Darwin got in a boat and went anywhere.”

Wright is keenly aware that the origins debate and the greater science-religion landscape hold much more than the caricatured positions often presented. In the video below, Wright gives this solemn warning at the 3:04 mark:

Let’s put this thing on a broader canvas and let’s lighten up and have the proper discussion, instead of assuming that we already know, as soon as anyone mentions any scientific evidence for anything, “oh, they’re a Darwinian, they’re a liberal, they’re this that and the other.” Or, when somebody says they believe in God, “Oh, well you must be anti science then.” These are both trivial—actually childish reactions and we need to grow up.



Compare Wright’s wise cautionary words with Karl Giberson’s stereotypical attack on “Evangelicals” from the New York Times op-ed this week.

The rejection of science seems to be part of a politically monolithic red-state fundamentalism, textbook evidence of an unyielding ignorance on the part of the religious.

For Giberson, those who question the metaphysically-laden theory of evolution are guilty of a “rejection of science.” But Giberson sees hope, which in his world means some of those anti-intellectual fundamentalists are coming around to his position:

There are signs of change. Within the evangelical world, tensions have emerged between those who deny secular knowledge, and those who have kept up with it and integrated it with their faith. Almost all evangelical colleges employ faculty members with degrees from major research universities — a conduit for knowledge from the larger world. …

Scholars like Dr. Collins and Mr. Noll, and publications like Books & Culture, Sojourners and The Christian Century, offer an alternative to the self-anointed leaders. They recognize that the Bible does not condemn evolution and says next to nothing about gay marriage. They understand that Christian theology can incorporate Darwin’s insights and flourish in a pluralistic society.

Secular knowledge? For Giberson there is this thing called secular knowledge. It is a neutral, objective source of truth, free of metaphysical influence. It gives us things like “Darwin’s insights.” Then there is religious belief which must accommodate that “secular knowledge.” And of course to question evolution is to “deny secular knowledge.”

Giberson’s caricatures stand in stark contrast to Wright’s plea for more understanding. Ironically, the Wright video above was produced by Giberson’s BioLogos organization. Perhaps Giberson should watch it.

Friday, October 21, 2011

Gene Expression Evolution: Your Daily Teleology …

Here is a new paper that claims to show the rate of gene expression evolution in a range of different mammalian species. Of course the paper shows no such thing. What it does show are gene expression rates in extant species. And what they found is that those rates are all over the map. The rates are often similar, but in other cases the rates not only vary between species, they also vary between organs and even chromosomes. As usual, the evolutionists describe the findings using teleological language to cover over what evolution really says:

We show that the rate of gene expression evolution varies among organs, lineages and chromosomes, owing to differences in selective pressures: transcriptome change was slow in nervous tissues and rapid in testes, slower in rodents than in apes and monotremes, and rapid for the X chromosome right after its formation.

Of course there is no such thing as “selective pressure.” This phrase is commonly used to envision an active process that responds to environmental challenges. If natural selection shapes and designs the species according to need, then it sounds more plausible. In reality, all natural selection does is kill off the bad designs.

Although gene expression evolution in mammals was strongly shaped by purifying selection, …

Translation: The evolutionists found that in many cases similar genes have similar expression rates.

we identify numerous potentially selectively driven expression switches, which occurred at different rates across lineages and tissues and which probably contributed to the specific organ biology of various mammals.

In other words, the evolutionists also found some similar genes that have significantly different expression rates. So the evolutionists must infer a new kind of evolution. Instead of mutations grinding away and, on rare occasion, providing a slightly better design (in terms of reproduction, of course), the new kind of evolution states that the genes and their regulation mechanisms are generally already in place. What changes is their expression rates. So evolution created all these genes and regulation mechanisms, not knowing that it had just created the building blocks for all this massive biological complexity. All that was needed was some expression rate changes.

Aside from being unlikely, this new kind of evolution calls for massive serendipity. As usual, it is presented in teleological terms.

Transcription Factors: More Species-Specific Biology

Evolutionists say that molecular biology has provided resounding confirmations of the fact of evolution. But actually the new molecular data reveal many contradictions. Far from confirming evolution, molecular biology has revealed yet more problems with the “fact.” For example, we find variations between species that are at odds with evolutionary expectations. One such example is in the transcription factors—proteins that bind to DNA and influence which genes are expressed.

Last year I discussed a study of how transcription factor binding is not conserved between mice and men. Evolutionists were surprised to find that similar transcription factors in human and mouse embryonic stem cells bind in very different DNA locations. In fact, the binding sites are often so-called “lineage-specific,” meaning that the transcription factor binds to a section of DNA that is unique to that species. As one writer explained:

Remarkably, many of these RABS [repeat-associated binding sites] were found in lineage-specific repeat elements that are absent in the comparison species, suggesting that large numbers of binding sites arose more recently in evolution and may have rewired the regulatory architecture in embryonic stem cells on a substantial scale.

Rewired the regulatory architecture on a substantial scale? In other words, evolutionists must believe that although evolution had provided, in a common ancestor to mice and men, perfectly good transcription factors and perfectly good binding sites for those transcription factors to attach to, nonetheless this was all “rewired.”

They call this species-specific or lineage-specific biology, which simply means that evolution’s predictions don’t work. Evolution doesn’t help explain the findings; it is a gratuitous, unparsimonious layer added to the science. Rather than the theory guiding the science and elucidating the findings, it is the science that is contradicting the theory.

Such divergence between transcription factor binding sites even shows up in very similar species, such as different species of yeast. As one paper explains, “most of these sites have diverged across these species, far exceeding the interspecies variation in orthologous genes.” And as usual the evolutionists did not miss a step in reinventing their theory to accommodate the contradictory findings:

Transcription factor binding sites have therefore diverged substantially faster than ortholog content. Thus, gene regulation resulting from transcription factor binding is likely to be a major cause of divergence between related species.

In other words, the unexpected findings between different species become a causal mechanism for the evolutionists. The right mutations just happened to occur in both the genes that code for the transcription factors, and in the DNA to create new binding sites which just happened to drive evolutionary change to fantastic new designs. Saying this is unlikely is putting it kindly.

Another study found “large interspecies differences in transcriptional regulation” between different vertebrates which the evolutionists claimed “provide insight into regulatory evolution” and reveal “the evolutionary dynamics of transcription factor binding.” But of course the findings reveal no such thing. There were no “evolutionary dynamics” revealed by the transcription factor binding differences. That is a multiplied entity based on the dogma that evolution is a fact. Here is how one writer described the findings:

Researchers from Cambridge, Glasgow and Greece have discovered a remarkable amount of plasticity in how transcription factors, the proteins that bind to DNA to control the activation of genes, maintain their function over large evolutionary distances.

The text books tell us that transcription factors recognise the genes that they regulate by binding to short, sequence-specific lengths of DNA upstream or downstream of their target genes. It was widely assumed that, like the sequences of the genes themselves, these transcription factor binding sites would be highly conserved throughout evolution. However, this turns out not to be the case in mammals.

[…]

In all tested species, the transcription factors CEBPA and HNF4A are master regulators of liver-specific genes. By mapping the binding of CEBPA and HNF4A in the genomes of each species and comparing those maps, they found that in most cases neither the site nor the sequence of the transcription factor binding sites is conserved, yet despite this, these transcription factors still manage to regulate the largely conserved gene expression and function of liver tissue.

[…]

"By studying changes in transcription factor binding, we can understand the evolution of gene regulation," said Duncan Odom from Cancer Research UK Cambridge Research Institute and coauthor on the paper.

Once again the evolutionary textbooks are wrong, but no matter, evolution is a fact. Religion drives science and it matters.

Wednesday, October 19, 2011

New Research Continues to Point to a Super Progenitor

Everyone knows biology is full of complicated designs, but evolutionists think it arose spontaneously, as a result of the play of natural laws. In other words, it happened to happen. First there was nothing, then there was something, then that something became very complicated. All this just happened to happen.

There are many problems with this evolutionary narrative. One is that we can’t explain how such complexity could have arisen on its own. Another is that if evolution is true, then complexity must have somehow formed early in evolutionary history. In fact, evolutionists sometimes use this fact to dodge the failure of their idea. They say that immense complexities, such as molecular machines and codes, are not really a problem because they occurred so early in evolutionary history. That early history, these evolutionists say, falls under the origin of life (OOL) phase, not evolution proper. So with a wave of the hand, they dismiss major failures of their idea.

But the failure of the evolutionary expectation of simple beginnings will not go away so easily. One such example in the news is the last universal common ancestor (LUCA) to all life. If evolution is true, then this ancient progenitor of all life must have been extremely complex. Here is what I wrote ten years ago in my book Darwin’s God:

[T]he next step was to piece together what the progenitor would have looked like by comparing the genetic differences and similarities of the three lineages. But the task became confusing due to the wide variety of genes between and amongst the three lineages. No clear picture of a simple progenitor emerged. Instead the only solution seemed to be a super progenitor that already had most of the highly complex traits found in each of the three lineages. The super progenitor would have been as complex as modern cells yet would have somehow arisen in a short time.

This story has not changed and recent research continues to point to a mythical “super progenitor.”

Last Universal Common Ancestor More Complex Than Previously Thought

New evidence suggests that LUCA was a sophisticated organism after all, with a complex structure recognizable as a cell, researchers report. Their study appears in the journal Biology Direct.

The study builds on several years of research into a once-overlooked feature of microbial cells, a region with a high concentration of polyphosphate, a type of energy currency in cells. Researchers report that this polyphosphate storage site actually represents the first known universal organelle, a structure once thought to be absent from bacteria and their distantly related microbial cousins, the archaea. This organelle, the evidence indicates, is present in the three domains of life: bacteria, archaea and eukaryotes (plants, animals, fungi, algae and everything else).

The existence of an organelle in bacteria goes against the traditional definition of these organisms, said University of Illinois crop sciences professor Manfredo Seufferheld, who led the study.

"It was a dogma of microbiology that organelles weren't present in bacteria," he said. But in 2003 in a paper in the Journal of Biological Chemistry, Seufferheld and colleagues showed that the polyphosphate storage structure in bacteria (they analyzed an agrobacterium) was physically, chemically and functionally the same as an organelle called an acidocalcisome (uh-SID-oh-KAL-sih-zohm) found in many single-celled eukaryotes.

Their findings, the authors wrote, "suggest that acidocalcisomes arose before the prokaryotic (bacterial) and eukaryotic lineages diverged." The new study suggests that the origins of the organelle are even more ancient.

So even given evolutionary assumptions, this evidence indicates an early organelle and with it, early complexity.


"There are many possible scenarios that could explain this, but the best, the most parsimonious, the most likely would be that you had already the enzyme even before diversification started on Earth," said study co-author Gustavo Caetano-Anollés, a professor of crop sciences and an affiliate of the Institute for Genomic Biology at Illinois. "The protein was there to begin with and was then inherited into all emerging lineages."


But the evolution of even a single protein is astronomically unlikely, even according to evolutionists’ unrealistically optimistic assumptions.

The study lends support to a hypothesis that LUCA may have been more complex even than the simplest organisms alive today, said James Whitfield, a professor of entomology at Illinois and a co-author on the study.

"You can't assume that the whole story of life is just building and assembling things," Whitfield said. "Some have argued that the reason that bacteria are so simple is because they have to live in extreme environments and they have to reproduce extremely quickly. So they may actually be reduced versions of what was there originally. According to this view, they've become streamlined genetically and structurally from what they originally were like. We may have underestimated how complex this common ancestor actually was."

Early complexity is yet another example of evolutionary expectations gone wrong. Religion drives science, and it matters.

Tuesday, October 18, 2011

Two-Fold Fragile Codons and Amino Acids

If you read Brian Cusack’s paper (discussed here) you may have wondered why the evolutionists did not distinguish the two-fold fragile codons from the single-fold fragile codons. Perhaps I missed it, but I saw no mention of this distinction. The evolutionists define as “fragile” those codons that can be changed into a STOP codon with a single substitution. They are shaded gray in the table below (from Figure 1 in the paper):

[Figure 1 from the paper: the standard codon table, with the fragile codons and the fragile amino acids shaded in gray.]
For instance, TCG, which codes for the amino acid serine (Ser), becomes a STOP if the cytosine (C) at the second position is replaced with an adenine (A). As the table shows, there are 18 such codons (shaded in gray).

But of these 18 “fragile” codons, five of them are two-fold fragile. That is, there are two different substitutions that can change the codon into a STOP. The other 13 codons are single-fold fragile in that one and only one substitution can effect the change to a STOP.

The two-fold fragile codons are: TTA, TCA, TAT, TAC and TGG. The first two code for the amino acids leucine (Leu) and serine, respectively. The next two both code for tyrosine (Tyr), and the final one codes for tryptophan (Trp).

This leads to another distinction. The evolutionists also point out that there are six amino acids that are coded for exclusively by fragile codons. They are also shaded in gray in the figure (Tyr, Gln, Lys, Glu, Cys and Trp). These are the “fragile amino acids.”

But Tyr and Trp—both aromatic hydrophobics and the two largest amino acids—are coded for exclusively by two-fold fragile codons. We might call them the “two-fold fragile amino acids.”
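These counts are easy to verify. The following is a minimal Python sketch (mine, not from Cusack’s paper) that, assuming the standard nuclear genetic code with STOP codons TAA, TAG and TGA, enumerates the fragile codons and flags the two-fold fragile ones:

```python
from itertools import product

BASES = "TCAG"
STOPS = {"TAA", "TAG", "TGA"}  # STOP codons of the standard nuclear genetic code

def stops_reachable(codon):
    """Count the distinct single-nucleotide substitutions that turn `codon` into a STOP."""
    count = 0
    for pos in range(3):
        for base in BASES:
            if base != codon[pos]:
                mutant = codon[:pos] + base + codon[pos + 1:]
                if mutant in STOPS:
                    count += 1
    return count

all_codons = ["".join(c) for c in product(BASES, repeat=3)]
sense_codons = [c for c in all_codons if c not in STOPS]

fragile = {}
for codon in sense_codons:
    n = stops_reachable(codon)
    if n > 0:
        fragile[codon] = n

two_fold = sorted(c for c, n in fragile.items() if n >= 2)

print(len(fragile))  # 18 fragile codons
print(two_fold)      # ['TAC', 'TAT', 'TCA', 'TGG', 'TTA'] -- the five two-fold fragile codons
```

Running this confirms the counts above: 18 fragile codons, of which five (TTA, TCA, TAT, TAC and TGG) are two-fold fragile.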

The theme of the evolutionists’ paper is that these fragile codons appear less frequently where transcriptional error correction mechanisms are less effective. Fewer fragile codons means there will be fewer erroneous STOP signals in regions where they are more likely to go uncorrected.

All that makes sense, but why didn’t the evolutionists mention the two-fold fragile codons and the amino acids that are coded for exclusively by them? Would they not have expected the correlation with the correction mechanisms to be even stronger? Am I missing something, or could this be yet another case of evolutionary confirmation bias by the authors and reviewers?

Monday, October 17, 2011

Nature: Life is Complicated

According to the theory of evolution the biological world arose spontaneously. Evolutionists, however, prefer to use teleological language to explain how biology came to be. “Natural selection,” they explain, “has evolved a strategy.” Teleological language is needed to cover over the awkwardness of evolution. For evolutionists have no choice. In spite of the data, evolutionists must dogmatically insist that they are right. It must be a fact that the most complex designs arose all by themselves. And complex they are. In fact, in spite of having been created by nothing and arising all by themselves, biological designs are too complex for our best scientists to unravel. Armies of the world’s smartest researchers using the most powerful supercomputers available still cannot figure out how it all works. As one article from last year explained, “Life is complicated.”

Non-coding DNA is crucial to biology, yet knowing that it is there hasn't made it any easier to understand what it does. "We fooled ourselves into thinking the genome was going to be a transparent blueprint, but it's not," says Mel Greaves, a cell biologist at the Institute of Cancer Research in Sutton, UK.

Instead, as sequencing and other new technologies spew forth data, the complexity of biology has seemed to grow by orders of magnitude.

Complexity growing by orders of magnitude? This is precisely what was not expected.

"It seems like we're climbing a mountain that keeps getting higher and higher," says Jennifer Doudna, a biochemist at the University of California, Berkeley. "The more we know, the more we realize there is to know."

Biologists have seen promises of simplicity before. The regulation of gene expression, for example, seemed more or less solved 50 years ago. In 1961, French biologists François Jacob and Jacques Monod proposed the idea that 'regulator' proteins bind to DNA to control the expression of genes. Five years later, American biochemist Walter Gilbert confirmed this model by discovering the lac repressor protein, which binds to DNA to control lactose metabolism in Escherichia coli bacteria. For the rest of the twentieth century, scientists expanded on the details of the model, but they were confident that they understood the basics. "The crux of regulation," says the 1997 genetics textbook Genes VI (Oxford Univ. Press), "is that a regulator gene codes for a regulator protein that controls transcription by binding to particular site(s) on DNA."

Indeed, for evolutionists all of biology is a fluke. It just happened to arise, so isn’t it simple?

Just one decade of post-genome biology has exploded that view. Biology's new glimpse at a universe of non-coding DNA — what used to be called 'junk' DNA — has been fascinating and befuddling. Researchers from an international collaborative project called the Encyclopedia of DNA Elements (ENCODE) showed that in a selected portion of the genome containing just a few per cent of protein-coding sequence, between 74% and 93% of DNA was transcribed into RNA.

Much non-coding DNA has a regulatory role; small RNAs of different varieties seem to control gene expression at the level of both DNA and RNA transcripts in ways that are still only beginning to become clear. "Just the sheer existence of these exotic regulators suggests that our understanding about the most basic things — such as how a cell turns on and off — is incredibly naive," says Joshua Plotkin, a mathematical biologist at the University of Pennsylvania in Philadelphia.

But wait, did not the non-coding DNA, like everything else, spontaneously arise by itself? How could we be so “incredibly naïve” about all of this?

Even for a single molecule, vast swathes of messy complexity arise. The protein p53, for example, was first discovered in 1979, and despite initially being misjudged as a cancer promoter, it soon gained notoriety as a tumour suppressor — a 'guardian of the genome' that stifles cancer growth by condemning genetically damaged cells to death. Few proteins have been studied more than p53, and it even commands its own meetings. Yet the p53 story has turned out to be immensely more complex than it seemed at first.

In 1990, several labs found that p53 binds directly to DNA to control transcription, supporting the traditional Jacob–Monod model of gene regulation. But as researchers broadened their understanding of gene regulation, they found more facets to p53. Just last year, Japanese researchers reported that p53 helps to process several varieties of small RNA that keep cell growth in check, revealing a mechanism by which the protein exerts its tumour-suppressing power.

Even before that, it was clear that p53 sat at the centre of a dynamic network of protein, chemical and genetic interactions. Researchers now know that p53 binds to thousands of sites in DNA, and some of these sites are thousands of base pairs away from any genes. It influences cell growth, death and structure and DNA repair. It also binds to numerous other proteins, which can modify its activity, and these protein–protein interactions can be tuned by the addition of chemical modifiers, such as phosphates and methyl groups. Through a process known as alternative splicing, p53 can take nine different forms, each of which has its own activities and chemical modifiers. Biologists are now realizing that p53 is also involved in processes beyond cancer, such as fertility and very early embryonic development. In fact, it seems wilfully ignorant to try to understand p53 on its own. Instead, biologists have shifted to studying the p53 network, as depicted in cartoons containing boxes, circles and arrows meant to symbolize its maze of interactions.

Did all of this spontaneously arise via random biological change, such as that caused by mutations?

"When we started out, the idea was that signalling pathways were fairly simple and linear," says Tony Pawson, a cell biologist at the University of Toronto in Ontario. "Now, we appreciate that the signalling information in cells is organized through networks of information rather than simple discrete pathways. It's infinitely more complex."

Evolution’s predictions routinely turn out to be false. Evolutionists are constantly surprised because their theory is always pointing in the wrong direction.

"In many cases you've got high-throughput projects going on, but much of the biology is still occurring on a small scale," says James Collins, a bioengineer at Boston University in Massachusetts. "We've made the mistake of equating the gathering of information with a corresponding increase in insight and understanding."

A new discipline — systems biology — was supposed to help scientists make sense of the complexity. The hope was that by cataloguing all the interactions in the p53 network, or in a cell, or between a group of cells, then plugging them into a computational model, biologists would glean insights about how biological systems behaved.

In the heady post-genome years, systems biologists started a long list of projects built on this strategy, attempting to model pieces of biology such as the yeast cell, E. coli, the liver and even the 'virtual human'. So far, all these attempts have run up against the same roadblock: there is no way to gather all the relevant data about each interaction included in the model.

In many cases, the models themselves quickly become so complex that they are unlikely to reveal insights about the system, degenerating instead into mazes of interactions that are simply exercises in cataloguing.

No way to gather all the relevant data about each interaction? But according to evolution the creative force behind biology is, well, nothing. So how could there be such mazes of interactions?

In retrospect, it was probably unrealistic to expect that charting out the biological interactions at a systems level would reveal systems-level properties, when many of the mechanisms and principles governing inter-and intracellular behaviour are still a mystery, says Leonid Kruglyak, a geneticist at Princeton University in New Jersey. He draws a comparison to physics: imagine building a particle accelerator such as the Large Hadron Collider without knowing anything about the underlying theories of quantum mechanics, quantum chromodynamics or relativity. "You would have all this stuff in your detector, and you would have no idea how to think about it, because it would involve processes that you didn't understand at all," says Kruglyak. "There is a certain amount of naivety to the idea that for any process — be it biology or weather prediction or anything else — you can simply take very large amounts of data and run a data-mining program and understand what is going on in a generic way."

From where did that “naivety” come, and will they learn from their mistakes? It seems not:

For example, transcription factors encoded in the urchin embryo's genome are first activated by maternal proteins. These embryonic factors, which are active for only a short time, trigger downstream transcription factors that interact in a positive feedback circuit to switch each other on permanently. Like the sea urchin, other organisms from fruitflies to humans organize development into 'modules' of genes, the interactions of which are largely isolated from one another, allowing evolution to tweak each module without compromising the integrity of the whole process. Development, in other words, follows similar rules in different species.

"The fundamental idea that the genomic regulatory system underlies all the events of development of the body plan, and that changes in it probably underlie the evolution of body plans, is a basic principle of biology that we didn't have before," says Davidson. That's a big step forwards from 1963, when Davidson started his first lab. Back then, he says, most theories of development were "manifestly useless".

In other words, the massive biological change evolution brought about was based on the genomic regulatory system which, of course, evolution had just happened to have already created. That was fortunate.

This article is worth revisiting because the problem with all of this is not merely that mistakes were made. Nor is there anything wrong with a theory that doesn’t work out. This happens all the time in science. The problem is that evolution is driven by theological and philosophical convictions that won’t be overturned by scientific evidence. Religion drives science, and it matters.

Sunday, October 16, 2011

Brian Cusack’s Latest: Anti-Parsimonious, Teleological, Petitio Principii, Cum Hoc Ergo Propter Hoc and Misrepresentations—Other Than That It’s Perfect

Or should we say, it is perfect, for creatively finding new ways to cram as many fallacies as possible into a single paper is precisely what “scientific” evolutionism seems to be all about. Cusack’s latest peer-reviewed contribution to the evolution literature, Preventing Dangerous Nonsense: Selection for Robustness to Transcriptional Error in Human Genes, is perfectly typical. But alas, due to the strict page limits of Darwin’s God, we are only able to provide a brief overview.

Background

When a gene is used to synthesize a protein, error checking and prevention is performed all along the way. An important and dangerous error is the so-called nonsense error, in which the code for an amino acid is erroneously replaced with a stop signal. This causes the protein synthesis process to be halted in midstream, leaving a half-baked and useless segment of protein. Cells have various processes to check for and correct such nonsense errors, but another way around the problem is to avoid the genetic coding that is particularly susceptible to nonsense errors.
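To make the problem concrete, here is a toy illustration (mine, not from Cusack’s paper) of how a single mistranscribed base that creates a premature STOP codon truncates the protein; the short sequence and the abbreviated codon table are hypothetical:

```python
# Abbreviated codon table -- only the codons used below ('*' marks a STOP signal).
CODON_TABLE = {
    "ATG": "M", "TCG": "S", "GAA": "E", "AAA": "K", "TGG": "W",
    "TAA": "*", "TAG": "*", "TGA": "*",
}

def translate(coding_sequence):
    """Translate codon by codon, halting at the first STOP codon encountered."""
    protein = []
    for i in range(0, len(coding_sequence) - 2, 3):
        amino_acid = CODON_TABLE[coding_sequence[i:i + 3]]
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return "".join(protein)

correct  = "ATG" "TCG" "GAA" "AAA" "TGG" "TAA"
nonsense = "ATG" "TAG" "GAA" "AAA" "TGG" "TAA"  # TCG mistranscribed as TAG (a nonsense error)

print(translate(correct))   # MSEKW -- full-length product
print(translate(nonsense))  # M     -- truncated, half-baked fragment
```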

Anti-parsimonious

The main contribution of Cusack’s paper is its elucidation of how these correction and prevention mechanisms often complement each other nicely. In particular, the error correction mechanisms have their limitations. One of the correction mechanisms usually doesn’t work for genes that are written out in one, single continuous region. And for genes that are divided into several separate regions, that mechanism often doesn’t work for the final region.

It is in these particular regions—where the error correction is more limited—that the prevention is stronger. In these regions, the particular genetic coding that is susceptible to nonsense errors is diminished. It would be like having a spell-checker that cannot check a certain page, but that page doesn’t have any long words to begin with.

This and other examples need nothing more than common sense to understand. Looking at the design of the error correction and prevention mechanisms, it makes perfect sense that where the error correction is less effective, there would be more error prevention. Nonetheless, the evolutionists break every rule of parsimony to impose their evolutionary framework. They multiply entities and construct superfluous causes. From Occam to Einstein we know not to do this, but evolutionists must have their theory. Here are two examples from the paper:

Given the high rate of transcriptional errors in eukaryotes, we hypothesized that natural selection has promoted a dual strategy of “prevention and cure” to alleviate the problem of nonsense transcriptional errors. A prediction of this hypothesis is that [the error correction’s] inefficiency should leave a signature of “transcriptional robustness” in human gene sequences that reduces the frequency of nonsense transcriptional errors.

[…]

Interestingly, one group of genes falls entirely outside of the range of [the error correction’s] surveillance. Replication-dependent histones contain neither introns in their coding sequences nor polyA-tail in their mRNAs. Therefore, histone genes represent a blind-spot for both mammalian [the error correction] pathways. According to our hypothesis histone genes should represent the most transcriptionally robust genes in the mammalian genome since PTC-containing transcripts of their genes will not be recognized and degraded before translation.

Evolution adds nothing to the science here. These are yet more examples of how evolution is a gratuitous explanation, adding nothing but “multiplied entities” as Occam put it. We may as well say, with the Aristotelians, that fire is hot because it has the quality of heat.

Teleological

Evolution adds little to the science beyond gratuitous explanation, and furthermore that explanation is awkward. The theory states that the entire biological world just happened to arise all by itself.

Not surprisingly evolutionists never describe it this way. Nor do they use equally accurate but more detailed explanations, such as that blind mutations just happened to create complex, interdependent designs while natural selection killed off the bad designs. Such accurate explanations of the theory are not used because they make obvious the absurdity of the whole project.

Instead evolutionists craft clever explanations that cast evolution and its natural selection in the active role of a designer. The theory sounds so much more plausible when natural selection responds to a need by creating a new design. And so there is an underlying, latent Lamarckianism running through the evolution genre. Out of one side of their mouth they rail against teleology while from the other they appeal to it over and over. Here are typical examples from the paper:

we hypothesized that natural selection has promoted a dual strategy of “prevention and cure” to alleviate the problem of nonsense transcriptional errors.

[…]

Nonsense errors are potentially highly toxic for the cell, so natural selection has evolved a strategy called Nonsense Mediated Decay (NMD) to “cure” such errors.

[…]

Moreover, these “prevention and cure” strategies are used interchangeably

Natural selection has promoted a dual strategy to alleviate a problem? Strategies are used interchangeably? Of course evolutionists do not mean any of this to be true. Their teleology is rhetorical. They need it to avoid the literal.

Petitio principii

The evolutionists force-fit the evidence into their theory, and the fit isn’t very good. Cusack’s flawed thesis is that evolution predicts how the error correction and prevention methods complement each other. But as usual the project depends on the pre-existence of biology’s wonders. In this case, the evolutionists believe that evolution just happened to create the genetic code, which conveniently just happened to have some stop signals.

Evolution also just happened to create genetic information, including stop signs at the appropriate places, and the incredible molecular machines to read, copy and translate that genetic information, and to stop at the stop signs.

But sometimes errors occurred which inserted stop signs somewhere in the middle of a copy of a gene. Fortunately, evolution just happened to create incredible molecular machinery and mechanisms to check for and correct for such errors. The likelihood of all (or any) of this happening is of course beyond ridiculous. The theory isn’t even wrong.

The only way to avoid evolution’s massive contradictions is simply to assume it is true. Having swallowed such lunacy the evolutionists are now in a position to declare that the new evidence is yet another fulfilled prediction of, yes, evolution. Evolution is true, therefore evolution is true.

Cum hoc ergo propter hoc

A common evolutionary fallacy is to confuse correlation with causation. In this case Cusack and the evolutionists find a good correlation between the correction and prevention mechanisms. Simply put, where the correction is weaker, the prevention is stronger. And so they assume the former is the cause of the latter via the evolutionary process:

We observe that single-exon genes have evolved to become robust to mistranscription, because they show a significant tendency to avoid fragile codons relative to robust codons when compared to multi-exon genes.

[…]

Depletion of fragile codons is due primarily to inactivity of EJC–dependent NMD but also to reduced efficiency of PABP–dependent NMD.

[…]

We show that variable NMD efficiency also leaves its signature in the coding sequences of human genes and in the amino-acid content of the proteins they encode.

When will evolutionists learn that correlation does not imply causation? The answer, of course, is that they will learn this only when they learn to stop corrupting science with their religious dogma. That may sound harsh, but that is precisely what evolutionists are doing. Their metaphysics mandates evolution to be true. Therefore such correlations must be assumed to be the result of evolutionary causation.

Misrepresentations

No evolutionary treatise would be complete without misrepresentations of the science. If there is any common thread to scientific evolutionism, it is the very bizarre interpretations of the scientific evidence which, to put it kindly, amount to misrepresentations. Such misrepresentations run all through the genre, from the popular works on down to the technical papers. Consider these misrepresentations from Cusack’s paper:

In contrast, gene expression errors are not inherited and have tended to be disregarded in evolutionary studies. Here we show how human genes have evolved a mechanism to reduce the occurrence of a specific type of gene expression error—transcriptional errors that create premature STOP codons (so-called “nonsense errors”).

But of course the paper showed no such thing. It did not “show how human genes have evolved a mechanism …” That is an incredibly unlikely, religiously-driven hypothesis that makes little scientific sense. The paper continues:

Nonsense errors are potentially highly toxic for the cell, so natural selection has evolved a strategy called Nonsense Mediated Decay (NMD) to “cure” such errors. However this cure is inefficient. Here we describe how a preventative strategy of “transcriptional robustness” has evolved to decrease the frequency of nonsense errors.

These are yet more blatant misrepresentations of the science. The paper does not “describe how a preventative strategy of ‘transcriptional robustness’ has evolved to decrease the frequency of nonsense errors.” The paper not only did not describe how such a strategy evolved, it did not even show that it evolved.

Religion drives science, and it matters.