Thursday, July 26, 2018

What is a Dependency Graph?

Information Organization

A recent paper, authored by Winston Ewert, uses a dependency graph approach to model the relationships between the species. This idea is inspired by computer science which makes great use of dependency graphs.

Complicated software applications typically use a wealth of lower-level software routines. These routines have been developed, tested, and stored in modules for use by higher-level applications. When this happens, the application inherits the lower-level software and has a dependency on that module.

Such applications are written in human-readable languages such as Java. They then need to be translated into machine language. The compiler tool performs the translation, and the build tool assembles the result, along with the lower-level routines, into an executable program. These tools use dependency graphs to model the software, essentially building a design diagram, or blueprint, that shows the dependencies: which software modules will be needed, and how they are connected together.

Dependency graphs also help with software design. Because they provide a blueprint of the software architecture, they are helpful in designing decoupled architectures and promoting software reuse.

Dependency graphs are also used by so-called “DevOps” teams to assist at deployment time in sequencing and installing the correct modules.
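The build-time picture described above can be sketched in a few lines. The module names here are hypothetical; Python's standard library happens to ship a topological sorter, which computes exactly the ordering a build tool needs, with lower-level modules compiled before the applications that depend on them:

```python
# A minimal sketch of a software dependency graph (module names are made up).
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each module maps to the set of lower-level modules it depends on:
dependencies = {
    "app":      {"network", "ui"},
    "ui":       {"graphics"},
    "network":  {"sockets"},
    "graphics": set(),
    "sockets":  set(),
}

# A topological sort lists each module after all of its dependencies:
build_order = list(TopologicalSorter(dependencies).static_order())
print(build_order)  # low-level modules first, "app" last
```

The same structure, read in the other direction, shows reuse: a single low-level module can appear in the dependency sets of many applications.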

What Ewert has shown is that biology’s genomes reveal the same pattern: just as a computer application inherits software from a diverse range of lower-level modules, and those lower-level modules feed into a diverse range of applications, so genomes may inherit molecular sequence information from a wide range of genetic modules, and genetic modules may feed into a diverse range of genomes.

Superficially, from a distance, this may appear as the traditional evolutionary tree. But that model has failed repeatedly as scientists have studied the characters of species more closely. Dependency graphs, on the other hand, provide a far superior model of the relationships between the species, and their genetic information flow.

Thursday, July 19, 2018

New Paper Demonstrates Superiority of Design Model

Ten Thousand Bits?

Did you know Mars is going backwards? For the past few weeks, and for several weeks to come, Mars is in its retrograde motion phase. If you chart its position each night against the background stars, you will see it pause, reverse direction, pause again, and then get going again in its normal direction. And did you further know that retrograde motion helped to cause a revolution? Two millennia ago, Aristotelian physics dictated that the Earth was at the center of the universe. Aristarchus’ heliocentric model, which put the Sun at the center, fell out of favor. But what Aristotle’s geocentrism failed to explain was retrograde motion. If the planets are revolving about the Earth, then why do they sometimes pause, and reverse direction? That problem fell to Ptolemy, and the lessons learned are still important today.

Ptolemy explained anomalies such as retrograde motion with additional mechanisms, such as epicycles, while maintaining the circular motion that, as everyone knew, must be the basis of all motion in the cosmos. With fewer than a hundred epicycles, he was able to model, and accurately predict, the motions of the cosmos. But that accuracy came at a cost—a highly complicated model.

In the Middle Ages William of Occam pointed out that scientific theories ought to strive for simplicity, or parsimony. This may have been one of the factors that drove Copernicus to resurrect Aristarchus’ heliocentric model. Copernicus preserved the required circular motion, but by switching to a sun-centered model, he was able to reduce greatly the number of additional mechanisms, such as epicycles.

Both Ptolemy’s and Copernicus’ models accurately forecast celestial motion. But Copernicus was more parsimonious. A better model had been found.

Kepler proposed ellipses, and showed that the heliocentric model could become even simpler. It was not well accepted, though, because, as everyone knew, celestial bodies travel in circles. How foolish to think they would travel along elliptical paths. That next step toward greater parsimony would have to wait for the likes of Newton, who showed that Kepler’s ellipses were dictated by his new, highly parsimonious physics. Newton described a simple, universal gravitational law. Newton’s gravitational force would produce an acceleration, which could maintain orbital motion in the cosmos.

But was there really a gravitational force? The force was proportional to the mass of the object, a mass which was then cancelled out to compute the acceleration. Why not have gravity cause an acceleration straightaway?
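The cancellation is easy to check with a few lines of arithmetic. The constants below are standard modern values for Earth; this is an illustration of the point, not anything from Newton's own text:

```python
# Sketch: the test mass m cancels when computing gravitational acceleration.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24   # Earth's mass, kg
r = 6.371e6    # Earth's radius, m

def accel_via_force(m):
    F = G * M * m / r**2  # Newton's law: force proportional to the mass m
    return F / m          # F = ma, so a = F/m, and m cancels

# The acceleration is the same for any test mass:
assert abs(accel_via_force(1.0) - accel_via_force(1000.0)) < 1e-9
print(accel_via_force(70.0))  # ~9.82 m/s^2, independent of the 70 kg
```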

Centuries later Einstein reported on a man in Berlin who fell out of a window. The man didn’t feel anything until he hit the ground! Einstein removed the gravitational force and made the physics even simpler yet.

The point here is that the accuracy of a scientific theory, by itself, means very little. It must be considered along with parsimony. This lesson is important today in this age of Big Data. Analysts know that a model can always be made more accurate by adding more terms. But are those additional terms meaningful, or are they merely epicycles? It looks good to drive the modeling error down to zero by adding terms, but when used to make future forecasts, such models perform worse.

There is a very real penalty for adding terms and violating Occam’s Razor, and today advanced algorithms are available for weighing the tradeoff between model accuracy and model parsimony.
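The epicycle effect is easy to demonstrate. In this sketch, with made-up data, a high-order polynomial drives the training error to zero by interpolating every point exactly, yet it forecasts far worse than the simple model underlying the data:

```python
def lagrange_fit(xs, ys):
    """Return the unique degree n-1 polynomial through the n given points."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

def linear(x):
    # The parsimonious model: the underlying trend y = 2x
    return 2.0 * x

# Noisy samples of y = 2x (hypothetical data):
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 2.2, 3.8, 6.3, 7.9]

exact = lagrange_fit(xs, ys)  # degree-4 "epicycle" model

# Training error: the interpolant is perfect; the simple line is merely close.
print(max(abs(exact(x) - y) for x, y in zip(xs, ys)))  # ~0.0
# Forecasting at x = 6 (true trend value: 12): the interpolant veers wildly.
print(exact(6.0), linear(6.0))
```

Information criteria such as AIC and BIC are examples of the algorithms mentioned above: they penalize each added term against the accuracy it buys.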

This brings us to common descent, a popular theory for modeling relationships between the species. As we have discussed many times here, common descent fails to model the species, and a great many additional mechanisms—biological epicycles—are required to fit the data.

And just as cosmology has seen a stream of ever improving models, the biological models can also improve. This week a very important model has been proposed in a new paper, authored by Winston Ewert, in the Bio-Complexity journal.

Inspired by computer software, Ewert’s approach models the species as sharing modules which are related by a dependency graph. This useful model in computer science also works well in modeling the species. To evaluate this hypothesis, Ewert uses three types of data, and evaluates how probable they are (accounting for parsimony as well as fit accuracy) using three models.

Ewert’s three types of data are: (i) Sample computer software, (ii) simulated species data generated from evolutionary / common descent computer algorithms, and (iii) actual, real species data.

Ewert’s three models are: (i) A null model which entails no relationships between any species, (ii) an evolutionary / common descent model, and (iii) a dependency graph model.

Ewert’s results are a Copernican Revolution moment. First, for the sample computer software data, not surprisingly the null model performed poorly. Computer software is highly organized, and there are relationships between different computer programs and the foundational software libraries they draw from. But comparing the common descent and dependency graph models, the latter performs far better at modeling the software “species.” In other words, the design and development of computer software is far better described and modeled by a dependency graph than by a common descent tree.

Second, for the simulated species data generated with a common descent algorithm, it is not surprising that the common descent model was far superior to the dependency graph. That would be true by definition, and serves to validate Ewert’s approach. Common descent is the best model for the data generated by a common descent process.

Third, for the actual, real species data, the dependency graph model is astronomically superior compared to the common descent model.

Let me repeat that in case the point did not sink in. Where it counted, common descent failed compared to the dependency graph model. The other data types served as useful checks, but for the data that mattered—the actual, real, biological species data—the results were unambiguous.

Ewert amassed a total of nine massive genetic databases. In every single one, without exception, the dependency graph model surpassed common descent.

Darwin could never have even dreamt of a test on such a massive scale.

Darwin also could never have dreamt of the sheer magnitude of the failure of his theory. Because you see, Ewert’s results do not reveal two competitive models with one model edging out the other.

We are not talking about a few decimal points difference. For one of the data sets (HomoloGene), the dependency graph model was superior to common descent by a factor of 10,064. The comparison of the two models yielded a preference for the dependency graph model of greater than ten thousand.

Ten thousand is a big number.

But it gets worse, much worse.

Ewert used Bayesian model selection which compares the probability of the data set given the hypothetical models. In other words, given the model (dependency graph or common descent), what is the probability of this particular data set? Bayesian model selection compares the two models by dividing these two conditional probabilities. The so-called Bayes factor is the quotient yielded by this division.

The problem is that the common descent model is so incredibly inferior to the dependency graph model that the Bayes factor cannot be typed out. In other words, the probability of the data set given the dependency graph model, is so much greater than the probability of the data set given the common descent model, that we cannot type the quotient of their division.
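The arithmetic can be sketched as follows. The log-probabilities here are made-up stand-ins, chosen only to reproduce a 10,064-bit difference; the point is that a quotient of two untypeably small probabilities becomes a simple subtraction of logarithms:

```python
# Sketch of Bayesian model selection with hypothetical numbers.
# Hypothetical log2-probabilities of one data set under the two models:
log2_p_dependency_graph = -50_000.0
log2_p_common_descent = -60_064.0

# The log of a quotient is the difference of the logs, so the log of the
# Bayes factor is just a subtraction:
log2_bayes_factor = log2_p_dependency_graph - log2_p_common_descent
print(log2_bayes_factor)  # 10064.0 bits in favor of the dependency graph
```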

Instead, Ewert reports the logarithm of the number. Remember logarithms? Remember how, in base 10, a logarithm of 2 really means 100, a logarithm of 3 means 1,000, and so forth?

Unbelievably, the 10,064 value is the logarithm (base 2) of the quotient! In other words, the probability of the data on the dependency graph model is so much greater than that on the common descent model, we need logarithms even to type it out. If you tried to type out the plain number, you would have to type a 1 followed by more than 3,000 zeros!

That’s the ratio of how probable the data are on these two models!

By using a base value of 2 in the logarithm we express the Bayes factor in bits. So the conditional probability for the dependency graph model has a 10,064-bit advantage over that of common descent.
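To see what 10,064 bits means as a plain number, one can expand it directly; Python's arbitrary-precision integers make this a short exercise:

```python
import math

# Sketch: expanding a 10,064-bit Bayes factor into a plain decimal number.
bits = 10_064
ratio = 2 ** bits              # the Bayes factor itself, computed exactly
digits = len(str(ratio))       # decimal digits needed to write it out
print(digits)                  # 3030 digits, i.e. a number exceeding 10**3029

# Cross-check against the change-of-base formula:
assert digits == math.floor(bits * math.log10(2)) + 1
```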

10,064 bits is far, far from the range in which one might actually consider the lesser model. See, for example, the Bayes factor Wikipedia page, which explains that a Bayes factor of 3.3 bits provides “substantial” evidence for a model, 5.0 bits provides “strong” evidence, and 6.6 bits provides “decisive” evidence.

This is ridiculous. 6.6 bits is considered to provide “decisive” evidence, and when the dependency graph model is compared to the common descent model, we get 10,064 bits.

But it gets worse.

The problem with all of this is that the Bayes factor of 10,064 bits for the HomoloGene data set is the very best case for common descent. For the other eight data sets, the Bayes factors range from 40,967 to 515,450.

In other words, while 6.6 bits would be considered to provide “decisive” evidence for the dependency graph model, the actual, real, biological data provide Bayes factors of 10,064 on up to 515,450.

We have known for a long time that common descent has failed hard. In Ewert’s new paper, we now have detailed, quantitative results demonstrating this. And Ewert provides a new model, with a far superior fit to the data.

Saturday, June 30, 2018

John Farrell Versus Isaac Newton

Guess Who Wins?

The title of John Farrell’s article in Commonweal from earlier this year is a dead giveaway. When writing about the interaction between faith and science, as Farrell does in the piece, the title “The Conflict Continues” is like a flashing red light that the mythological Warfare Thesis is coming at you.

Sure enough, Farrell does not disappoint. He informs his readers that the fear that science could “make God seem unnecessary” is “widespread today among religious believers,” particularly in the US where “opposition to belief in evolution remains very high.”

Indeed, this fear has “haunted the debate over the tension between religion and science for centuries.” Farrell notes that Edward Larson and Michael Ruse point out in their new book On Faith and Science, that the “conflict model doesn’t work so well.” But that seems to be a minor speed bump for Farrell. He finds that:

The idea that the world operates according to its own laws and regularities remains controversial in the evolution debate today, as Intelligent Design proponents attack the consensus of science on Darwinian evolution and insist that God’s direct intervention in the history of life can be scientifically demonstrated.

Farrell also writes that Isaac Newton, driven by concerns about secondary causes, “insisted God was still necessary to occasionally tweak the motions of the planets if any threatened to wander off course.”

Farrell’s piece is riddled with myths. Secondary causes are not nearly as controversial as he would have us believe. He utterly mischaracterizes ID, and Newton said no such thing. It is true that Newton suggested that the Creator could intervene in the cosmos (not “insisted”).

And was this the result of some radical voluntarism?

Of course not. Newton suggested God may intervene in the cosmos because the physics of the day (which, by the way, he invented) indicated that our solar system could occasionally have instabilities. The fact that it was running along just fine, and hadn’t yet blown up, suggested that something had intervened along the way.

Newton was arguing from science, not religion. But that doesn’t fit the Epicurean mythos that religion opposes naturalism while science confirms it. The reality is, of course, the exact opposite.

Sunday, May 20, 2018

New Paper Admits Failure of Evolution

Pop Quiz: Who Said It?

There are many fundamental problems with evolutionary theory. Origin of life studies have dramatically failed. Incredibly complex biological designs, both morphological and molecular, arose abruptly with far too little time to have evolved. The concept of punctuated equilibrium is descriptive, not explanatory. For example, the Cambrian Explosion is not explained by evolution and, in general, evolutionary mechanisms are inadequate to explain the emergence of new traits, body plans and new physiologies. Even a single gene is beyond the reach of evolutionary mechanisms. In fact, the complexity and sophistication of life cannot originate from non-biological matter under any scenario, over any expanse of space and time, however vast. On the other hand, the arch enemy of evolutionary theory, Lamarckian inheritance, in its variety of forms, is well established by the science.

Another Darwin’s God post?

No, these scientific observations are laid out in a new peer-reviewed, scientific paper.

Origin of Life

Regarding origin of life studies, which try to explain how living cells could somehow have arisen on an ancient, inorganic Earth, the paper explains that this idea should have long since been rejected, but instead it has fueled “sophisticated conjectures with little or no evidential support.”

the dominant biological paradigm - abiogenesis in a primordial soup. The latter idea was developed at a time when the earliest living cells were considered to be exceedingly simple structures that could subsequently evolve in a Darwinian way. These ideas should of course have been critically examined and rejected after the discovery of the exceedingly complex molecular structures involved in proteins and in DNA. But this did not happen. Modern ideas of abiogenesis in hydrothermal vents or elsewhere on the primitive Earth have developed into sophisticated conjectures with little or no evidential support.

In fact, abiogenesis has “no empirical support.”

independent abiogenesis on the cosmologically diminutive scale of oceans, lakes or hydrothermal vents remains a hypothesis with no empirical support

One problem, of many, is that the early Earth would not have allowed such monumental evolution to occur:

The conditions that would most likely to have prevailed near the impact-riddled Earth's surface 4.1–4.23 billion years ago were too hot even for simple organic molecules to survive let alone evolve into living complexity

In fact, the whole idea strains credibility “beyond the limit.”

The requirement now, on the basis of orthodox abiogenic thinking, is that an essentially instantaneous transformation of non-living organic matter to bacterial life occurs, an assumption we consider strains credibility of Earth-bound abiogenesis beyond the limit.

All laboratory experiments have ended in “dismal failure.” The information hurdle is of “superastronomical proportions” and simply could not have been overcome without a miracle.

The transformation of an ensemble of appropriately chosen biological monomers (e.g. amino acids, nucleotides) into a primitive living cell capable of further evolution appears to require overcoming an information hurdle of superastronomical proportions, an event that could not have happened within the time frame of the Earth except, we believe, as a miracle. All laboratory experiments attempting to simulate such an event have so far led to dismal failure.

Diversity of Life

But the origin of life is just the beginning of evolution’s problems. For science now suggests evolution is incapable of creating the diversity of life and all of its designs:

Before the extensive sequencing of DNA became available it would have been reasonable to speculate that random copying errors in a gene sequence could, over time, lead to the emergence of new traits, body plans and new physiologies that could explain the whole of evolution. However the data we have reviewed here challenge this point of view. It suggests that the Cambrian Explosion of multicellular life that occurred 0.54 billion years ago led to a sudden emergence of essentially all the genes that subsequently came to be rearranged into an exceedingly wide range of multi-celled life forms - Tardigrades, the Squid, Octopus, fruit flies, humans – to name but a few.

As one of the authors writes, “the complexity and sophistication of life cannot originate (from non-biological) matter under any scenario, over any expanse of space and time, however vast.” As an example, consider the octopus.


First, the octopus is an example of novel, complex features rapidly appearing, along with a vast array of genes without apparent ancestry:

Its large brain and sophisticated nervous system, camera-like eyes, flexible bodies, instantaneous camouflage via the ability to switch colour and shape are just a few of the striking features that appear suddenly on the evolutionary scene. The transformative genes leading from the consensus ancestral Nautilus (e.g., Nautilus pompilius) to the common Cuttlefish (Sepia officinalis) to Squid (Loligo vulgaris) to the common Octopus (Octopus vulgaris) are not easily to be found in any pre-existing life form.

But it gets worse. As Darwin’s God has explained, the cephalopods demonstrate a unique level of adenosine-to-inosine mRNA editing. It is yet another striking example of lineage-specific design that utterly contradicts macroevolution:

These data demonstrate extensive evolutionary conserved adenosine to inosine (A-to-I) mRNA editing sites in almost every single protein-coding gene in the behaviorally complex coleoid Cephalopods (Octopus in particular), but not in nautilus. This enormous qualitative difference in Cephalopod protein recoding A-to-I mRNA editing compared to nautilus and other invertebrate and vertebrate animals is striking. Thus in transcriptome-wide screens only 1–3% of Drosophila and human protein coding mRNAs harbour an A-to-I recoding site; and there only about 25 human mRNA messages which contain a conserved A-to-I recoding site across mammals. In Drosophila lineages there are about 65 conserved A-sites in protein coding genes and only a few identified in C. elegans which support the hypothesis that A-to-I RNA editing recoding is mostly either neutral, detrimental, or rarely adaptive. Yet in Squid and particularly Octopus it is the norm, with almost every protein coding gene having an evolutionary conserved A-to-I mRNA editing site isoform, resulting in a nonsynonymous amino acid change. This is a virtual qualitative jump in molecular genetic strategy in a supposed smooth and incremental evolutionary lineage - a type of sudden “great leap forward”. Unless all the new genes expressed in the squid/octopus lineages arose from simple mutations of existing genes in either the squid or in other organisms sharing the same habitat, there is surely no way by which this large qualitative transition in A-to-I mRNA editing can be explained by conventional neo-Darwinian processes, even if horizontal gene transfer is allowed. 


In the twentieth century Lamarckian inheritance was anathema for evolutionists. Careers were ruined, and every evolutionist knew the inheritance of acquired characteristics sat right alongside the flat earth and geocentrism in the history of ideas. The damning of Lamarck, however, was driven by dogma rather than data, and today the evidence has finally overcome evolutionary theory.

Indeed there is much contemporary discussion, observations and critical analysis consistent with this position led by Corrado Spadafora, Yongsheng Liu, Denis Noble, John Mattick and others, that developments such as Lamarckian Inheritance processes (both direct DNA modifications and indirect, viz. epigenetic, transmissions) in evolutionary biology and adjacent fields now necessitate a complete revision of the standard neo-Darwinian theory of evolution or “New Synthesis " that emerged from the 1930s and 1940s.

Indeed, we now know of a “plethora of adaptive Lamarckian-like inheritance mechanisms.”

There is, of course, nothing new in this paper. We have discussed these, and many, many other refutations of evolutionary theory. Yet the paper is significant because it appears in a peer-reviewed journal. Science is, if anything, conservative. It doesn’t exactly “follow the data,” at least until it becomes OK to do so. There are careers and reputations at stake.

And of course, there is religion.

Religion drives science, and it matters.

Saturday, May 12, 2018

Centrobin Found to be Important in Sperm Development

Numerous, Successive, Slight Modifications

Proteins are a problem for theories of spontaneous origins for many reasons. They consist of dozens, or often hundreds, or even thousands of amino acids in a linear sequence, and while many different sequences will do the job, that number is tiny compared to the total number of sequences that are possible. It is a proverbial needle-in-the-haystack problem, far beyond the reach of blind searches. To make matters worse, many proteins are overlapping, with portions of their genes occupying the same region of DNA. The same set of mutations would have to result in not one, but two proteins, making the search problem that much trickier. Furthermore, many proteins perform multiple functions. Random mutations somehow would have to find those very special proteins that can perform double duty in the cell. And finally, many proteins perform crucial roles within a complex environment. Without these proteins the cell sustains a significant fitness degradation. One protein that fits this description is centrobin, and now a new study shows it to be even more important than previously understood.
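The haystack arithmetic is easy to sketch. The 300-residue length below is illustrative only; centrobin itself runs to nearly a thousand amino acids:

```python
# Sketch of the needle-in-a-haystack arithmetic (illustrative length).
# With 20 amino acids per position, the number of possible sequences
# of a given length is 20**length.
length = 300
num_sequences = 20 ** length

# The count is too large to grasp directly, so express it in decimal digits:
digits = len(str(num_sequences))
print(digits)  # 391 digits, i.e. 20**300 exceeds 10**390
```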

Centrobin is a massive protein of almost a thousand amino acids. Its importance in the division of animal cells has been known for more than ten years. An important player in animal cell division is the centrosome organelle which organizes the many microtubules—long tubes which are part of the cell’s cytoskeleton. Centrobin is one of the many proteins that helps the centrosome do its job. Centrobin depletion causes “strong disorganization of the microtubule network,” and impaired cell division.

Now, a new study shows just how important centrobin is in the development of the sperm tail. Without centrobin, the tail, or flagellum, development is “severely compromised.” And once the sperm is formed, centrobin is important for its structural integrity. As the paper concludes:

Our results underpin the multifunctional nature of [centrobin] that plays different roles in different cell types in Drosophila, and they identify [centrobin] as an essential component for C-tubule assembly and flagellum development in Drosophila spermatogenesis.

Clearly centrobin is an important protein. Without it such fundamental functions as cell division and organism reproduction are severely impaired.

And yet how did centrobin evolve?

Not only is centrobin a massive protein, but there are no obvious candidate intermediate structures. It is not as though we have that “long series of gradations in complexity” that Darwin called for:

Although the belief that an organ so perfect as the eye could have been formed by natural selection, is enough to stagger any one; yet in the case of any organ, if we know of a long series of gradations in complexity, each good for its possessor, then, under changing conditions of life, there is no logical impossibility in the acquirement of any conceivable degree of perfection through natural selection.

Unfortunately, in the case of centrobin, we do not know of such a series. In fact, centrobin would seem to be a perfectly good example of precisely how Darwin said his theory could be falsified:

If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case.  

Darwin could “find out no such case,” but he didn’t know about centrobin. Darwin required “a long series of gradations,” formed by “numerous, successive, slight modifications.”

With centrobin we are nowhere close to fulfilling these requirements. In other words, today’s science falsifies evolution. This, according to Darwin’s own words.

Religion drives science, and it matters.

Monday, April 30, 2018

Meet Jamie Jensen: What Are They Teaching at Brigham Young University?

Bacterial Resistance to Antibiotics

Rachel Gross’ recent article about evolutionists’ public outreach contains several misconceptions that are, unfortunately, all too common. Perhaps most obvious is the mythological Warfare Thesis that Gross and her evolutionary protagonists heavily rely on. Plumbing the depths of ignorance, Gross writes:

Those who research the topic call this paradigm the “conflict mode” because it pits religion and science against each other, with little room for discussion. And researchers are starting to realize that it does little to illuminate the science of evolution for those who need it most.

“Those who research the topic call this paradigm the ‘conflict mode’”?


This is reminiscent of Judge Jones’ endorsement of Inherit the Wind as a primer for understanding the origins debate, for it is beyond embarrassing. Exactly who are those “who research the topic” to whom Gross refers?

Gross is apparently blithely unaware that there are precisely zero such researchers. The “conflict mode” is a long-discarded, failed view of history promoted in Inherit the Wind, a two-dimensional, upside-down rewrite of the 1925 Monkey Trial.

But ever since, evolutionists have latched onto the play, and the mythological history it promotes, in an unabashed display of anti-intellectualism. As Lawrence Principe has explained:

The notion that there exists, and has always existed, a “warfare” or “conflict” between science and religion is so deeply ingrained in public thinking that it usually goes unquestioned. The idea was however largely the creation of two late nineteenth-century authors who confected it for personal and political purposes. Even though no serious historians of science acquiesce in it today, the myth remains powerful, and endlessly repeated, in wider circles

Or as Jeffrey Russell writes:

The reason for promoting both the specific lie about the sphericity of Earth and the general lie that religion and science are in natural and eternal conflict in Western society, is to defend Darwinism. The answer is really only slightly more complicated than that bald statement.

Rachel Gross is, unfortunately, promoting the “general lie” that historians have long since been warning of. Her article is utter nonsense. The worst of junk news.

But it gets worse.

Gross next approvingly quotes Brigham Young University associate professor Jamie Jensen whose goal is to inculcate her students with Epicureanism. “Acceptance is my goal,” says Jensen, referring to her teaching of spontaneous origins in her Biology 101 class at the Mormon institution.

As we have explained many times, this is how evolutionists think. Explaining their anti-scientific, religious beliefs is not enough. You must believe. As Jensen explains:

By the end of Biology 101, they can answer all the questions really well, but they don’t believe a word I say. If they don’t accept it as being real, then they’re not willing to make important decisions based on evolution — like whether or not to vaccinate their child or give them antibiotics.

Whether or not to give their child antibiotics?

As we have discussed many times before, the equating of “evolution” with bacterial resistance to antibiotics is an equivocation and bait-and-switch.

The notion that one must believe in evolution to understand bacterial resistance to antibiotics is beyond absurd.

It not only makes no sense; it masks the monumental empirical contradictions that bacterial antibiotic resistance presents to evolution. As a university life science professor, Jensen is of course well aware of these basic facts of biology.

And she gets paid to teach people’s children?

Religion drives science, and it matters.

Saturday, April 28, 2018

Rewrite the Textbooks (Again), Origin of Mitochondria Blown Up

There You Go Again

Why are evolutionists always wrong? And why are they always so sure of themselves? With the inexorable march of science, the predictions of evolution, which evolutionists were certain of, just keep on turning out false. This week’s failure is the much celebrated notion that the eukaryote’s power plant—the mitochondria—shares a common ancestor with the alphaproteobacteria. A long time ago, as the story goes, that bacterial common ancestor merged with an early eukaryote cell. And these two entities, as luck would have it, just happened to need each other. Evolution had just happened to create that early bacterium, and that early eukaryote, in such a way that they needed, and greatly benefited from, each other. And, as luck would have it again, these two entities worked together. The bacterium would just happen to produce the chemical energy needed by the eukaryote, and the eukaryote would just happen to provide needed supplies. It paved the way for multicellular life with all of its fantastic designs. There was only one problem: the story turned out to be false.

The story that mitochondria evolved from the alphaproteobacteria lineage has been told with great conviction. Consider the Michael Gray 2012 paper, which boldly begins with the unambiguous truth claim that “Viewed through the lens of the genome it contains, the mitochondrion is of unquestioned bacterial ancestry, originating from within the bacterial phylum α-Proteobacteria (Alphaproteobacteria).”

There was no question about it. Gray was following classic evolutionary thinking: similarities mandate common origin. That is the common descent model. Evolutionists say that once one looks at biology through the lens of common descent everything falls into place.

Except that it doesn’t.

Over and over evolutionists have to rewrite their theory. Similarities once thought to have arisen from a common ancestor turn out to contradict the common descent model. Evolutionists are left having to say the similarities must have arisen independently.

And big differences, once thought to show up only in distant species, keep on showing up in allied species.

Biology, it turns out, is full of one-offs, special cases, and anomalies. The evolutionary tree model doesn’t work.

Now, a new paper out this week has shown that the mitochondria and alphaproteobacteria don’t line up the way originally thought. That “unquestioned bacterial ancestry” turns out to be, err, wrong.

The paper finds that mitochondria did not evolve from the currently hypothesized alphaproteobacterial ancestor, or from “any other currently recognized alphaproteobacterial lineage.”

The paper does, however, make a rather startling claim. The authors write:

our analyses indicate that mitochondria evolved from a proteobacterial lineage that branched off before the divergence of all sampled alphaproteobacteria.

Mitochondria evolved from a proteobacterial lineage, predating the alphaproteobacteria?

That is a startling claim because, well, simply put there is no evidence for it. The lack of evidence is exceeded only by the evolutionist’s confidence. Note the wording: “indicate.”

The evolutionist’s analyses indicate this new truth.

How can the evolutionists be so sure of themselves in the absence of literally any evidence?

The answer is, because they are evolutionists. They are completely certain that evolution is true. And since evolution must be true, the mitochondria had to have evolved from somewhere. And the same is true for the alphaproteobacteria. They must have evolved from somewhere.

And in both cases, that somewhere must be the earlier proteobacterial lineage. There are no other good evolutionary candidates.

Fortunately this new claim cannot be tested (and therefore cannot be falsified), because the “proteobacterial lineage” is nothing more than an evolutionary construct. Evolutionists can search for possible extant species for hints of a common ancestor with the mitochondria, but failure to find anything can always be ascribed to extinction of the common ancestor.

This is where evolutionary theory often ends up: failures ultimately lead to unfalsifiable truth claims. Because heaven forbid we should question the theory itself.

Religion drives science, and it matters.

Tuesday, April 24, 2018

New Ideas on the Evolution of Photosynthesis Reaction Centers

Pure Junk

Evolutionists do not have a clear understanding of how photosynthesis arose, as evidenced by a new paper from Kevin Redding’s laboratory at Arizona State University which states that:

After the Type I/II split, an ancestor to photosystem I fixed its quinone sites and then heterodimerized to bind PsaC as a new subunit, as responses to rising O2 after the appearance of the oxygen-evolving complex in an ancestor of photosystem II. These pivotal events thus gave rise to the diversity that we observe today.

That may sound like hard science to the uninitiated, but it isn’t.

The Type I/II split is a hypothetical event for which the main evidence is the belief that evolution is true. In fact, according to the science, it is astronomically unlikely that photosynthesis evolved, period.

And so, in typical fashion, the paper presents a teleological (“and then structure X evolved to achieve Y”) narrative to cover over the absurdity:

and then heterodimerized to bind PsaC as a new subunit, as responses to rising O2 …

First, let’s reword that so it is a little clearer: The atmospheric oxygen levels rose and so therefore the reaction center of an early photosynthesis system heterodimerized in order to bind a new protein (which helps with electron transfer).

This is a good example of the Aristotelianism that pervades evolutionary thought. This is not science, at least in the modern sense. And as usual, the infinitive form (“to bind”) provides the telltale sign. In other words, a new structure evolved as a response to X (i.e., as a response to the rising oxygen levels) in order to achieve Y (i.e., to achieve the binding of a new protein, PsaC).

But it gets worse.

Note the term: “heterodimerized.” A protein machine that consists of two identical proteins mated together is referred to as a “homodimer.” If two different proteins are mated together it is a “heterodimer.” In some photosynthesis systems, at the core of the reaction center is a homodimer. More typically, it is a heterodimer.

The Redding paper states that the ancient photosynthesis system “heterodimerized.” In other words, it switched, or converted, the protein machine from a homodimer to a heterodimer (in order to bind PsaC). The suffix “ize,” in this case, means to cause to be or to become. The ancient photosynthesis system caused the protein machine to become a heterodimer.

Such teleology reflects evolutionary thought, and let’s be clear—this is junk science. From a scientific perspective there is nothing redeeming here. It is pure junk.

But it gets worse.

These pivotal events thus gave rise to the diversity that we observe today.

Or as the press release described it:

Their [reaction centers’] first appearance and subsequent diversification has allowed photosynthesis to power the biosphere for over 3 billion years, in the process supporting the evolution of more complex life forms.

So evolution created photosynthesis which then, “gave rise to” the evolution of incredibly more advanced life forms. In other words, evolution climbed an astronomical entropic barrier and created incredibly unlikely structures which were crucial for the amazing evolutionary history to follow.

The serendipity is deafening.

Religion drives science, and it matters.

Wednesday, April 18, 2018

The Dinosaur “Explosion”

As Though They Were Planted There

In the famed Cambrian Explosion most of today’s animal phyla appeared abruptly in the geological strata. How could a process driven by blind, random mutations produce such a plethora of new species? Evolutionist Steve Jones has speculated that the Cambrian Explosion was caused by some crucial change in DNA. “Might a great burst of genetic creativity have driven a Cambrian Genesis and given birth to the modern world?” [1] What explanations such as this do not address is the problem of how evolution overcame such astronomical entropic barriers. Rolling dice, no matter how creatively, is not going to design a spaceship.

The Cambrian Explosion is not the only example of the abrupt appearance of new forms in the fossil record, and the other examples are no easier for evolution to explain. Nor has the old saw, that it’s the fossil record’s fault, fared well. There was once a time when evolutionists could appeal to gaps in the fossil record to explain why the species appear to arise abruptly, but no more. There has just been too much paleontology work, such as a new international study on dinosaurs published this week, confirming exactly what the strata have been showing all along: new forms really did arise abruptly.

The new study narrows the dating of the rise of dinosaurs in the fossil record. It confirms that many dinosaur species appeared in an “explosion” or what “we term the ‘dinosaur diversification event (DDE)’.” It was an “explosive increase in dinosaurian abundance in terrestrial ecosystems.” As the press release explains,

First there were no dinosaur tracks, and then there were many. This marks the moment of their explosion, and the rock successions in the Dolomites are well dated. Comparison with rock successions in Argentina and Brazil, where the first extensive skeletons of dinosaurs occur, show the explosion happened at the same time there as well.

As lead author Dr Massimo Bernardi at the University of Bristol explains, “it’s amazing how clear cut the change from ‘no dinosaurs’ to ‘all dinosaurs’ was.”

There just isn’t enough time, and it is another example of a failed prediction of the theory of evolution.

1. Steve Jones, Darwin’s Ghost, p. 206, Random House, New York, 2000.

h/t: The genius.

Sunday, April 15, 2018

Andreas Wagner: Genetic Regulation Drives Evolutionary Change

A Hall of Mirrors

A new paper from Andreas Wagner and co-workers argues that a key and crucial driver of evolution is changes to the interaction between transcription factor proteins and the short DNA sequences to which they bind. In other words, evolution is driven by varying the regulation of protein expression (and a particular type of regulation—the transcription factor-DNA binding) rather than varying the structural proteins themselves. Nowhere does the paper address or even mention the scientific problems with this speculative idea. For example, if evolution primarily proceeds by random changes to transcription factor-DNA binding, creating all manner of biological designs and species, then from where did those transcription factors and DNA sequences come? The answer—that they evolved for some different, independent, function; itself an evolutionary impossibility—necessitates astronomical levels of serendipity. Evolution could not have had foreknowledge. It could not have known that the emerging transcription factors and DNA sequence would, just luckily, be only a mutation away from some new function. This serendipity problem has been escalating for years as evolutionary theory has repeatedly failed, and evolutionists have applied ever more complex hypotheses to try to explain the empirical evidence. Evolutionists have had to impute to evolution increasingly sophisticated, complex, higher-order, mechanisms. And with each one the theory has become ever more serendipitous. So it is not too surprising that evolutionists steer clear of the serendipity problem. Instead, they cite previous literature as a way of legitimizing evolutionary theory. Here I will show examples of how this works in the new Wagner paper.

The paper starts right off with the bold claim that “Changes in the regulation of gene expression need not be deleterious. They can also be adaptive and drive evolutionary change.” That is quite a statement. To support it the paper cites a classic 1975 paper by Mary-Claire King and A. C. Wilson entitled “Evolution at two levels in humans and chimpanzees.” The 1975 paper admits that the popular idea and expectation that evolution occurs by mutations in protein-coding genes had largely failed. The problem was that, at the genetic level, the two species were too similar:

The intriguing result, documented in this article, is that all the biochemical methods agree in showing that the genetic distance between humans and the chimpanzee is probably too small to account for their substantial organismal differences.

Their solution was to resort to a monumental shift in evolutionary theory: evolution would occur via the tweaking of gene regulation.

We suggest that evolutionary changes in anatomy and way of life are more often based on changes in the mechanisms controlling the expression of genes than on sequence changes in proteins. We therefore propose that regulatory mutations account for the major biological differences between humans and chimpanzees.

In other words, evolution would have to occur not by changing proteins, but by changing protein regulation. What was left unsaid was that highly complex, genetic regulation mechanisms would now have to be in place, a priori, in order for evolution to proceed.

Where did those come from?

Evolution would have to create highly complex, genetic regulation mechanisms so that evolution could occur.

Not only would this ushering in of serendipity to evolutionary theory go unnoticed, it would, incredibly, be cited thereafter as a sort of evidence, in its own right, showing that evolution occurs by changes to protein regulation.

But of course the 1975 King-Wilson paper showed no such thing. The paper presupposed the truth of evolution, and from there reasoned that evolution must have primarily occurred via changes to protein regulation. Not because anyone could see how that could occur, but because the old thinking—changes to proteins themselves—wasn’t working.

This was not, and is not, evidence that changes in the regulation of gene expression can be “adaptive and drive evolutionary change,” as the Wagner paper claimed.

But this is how the genre works. The evolution literature makes unfounded claims that contradict the science, and justifies those claims with references to other evolution papers which do the same thing. It is a web of deceit.

Ultimately it all traces back to the belief that evolution is true.

The Wagner paper next cites a 2007 paper that begins its very first sentence with this unfounded claim:

It has long been understood that morphological evolution occurs through alterations of embryonic development.

I didn’t know that. And again, references are provided. This time to a Stephen Jay Gould book and a textbook, neither of which demonstrate that “morphological evolution occurs through alterations of embryonic development.”

These sorts of high claims by evolutionists are ubiquitous in the literature, but they never turn out to be true. Citations are given, and those in turn provide yet more citations. And so on, in a seemingly infinite hall of mirrors, where monumental assertions are casually made and immediately followed by citations that simply do the same thing.

Religion drives science, and it matters.

Saturday, April 14, 2018

IC: We Can Say It, But You Can’t

Pre-Adaptation

In contrast [to trait loss], the gain of genetically complex traits appears harder, in that it requires the deployment of multiple gene products in a coordinated spatial and temporal manner. Obviously, this is unlikely to happen in a single step, because it requires potentially numerous changes at multiple loci.

If you guessed this was written by an Intelligent Design advocate, such as Michael Behe describing irreducibly complex structures, you were wrong. It was evolutionist Sean Carroll and co-workers in a 2007 PNAS paper.

When a design person says it, it is heresy. When an evolutionist says it, it is the stuff of good solid scientific research.

The difference is the design person assumes a realist view (the genetically complex trait evinces design) whereas the evolutionist assumes an anti-realist view (in spite of all indications, the genetically complex trait must have arisen by blind causes).

To support their position, evolutionists often appeal to a pre-adaptation argument. This argument claims that the various subcomponents (gene products, etc.) needed for the genetically complex trait were each needed for some other function. Therefore, they evolved individually and independently, only later to serendipitously fit together perfectly and, in so doing, form a new structure with a new function that just happened to be needed. As Richard Dawkins once put it:

The bombardier beetle’s ancestors simply pressed into different service chemicals that already happened to be lying around. That’s often how evolution works.

The problem, of course, is that this is not realistic. To think that each and every one of the seemingly unending, thousands and thousands, of genetically complex traits just happened to luckily arise from parts that just happened to be lying around, is to make one’s theory dependent on too much serendipity.

Religion drives science, and it matters.

Wednesday, April 11, 2018

Leyden and Teixeira: Political “Civil War” Coming Because of Global Warming

The Politicization of Science

Twitter CEO Jack Dorsey recently tweeted that Peter Leyden’s and Ruy Teixeira’s article, “The Great Lesson of California in America’s New Civil War,” is a “Great read.” The article both urges and forecasts a blue-state takeover of America where our current political divide gives way to a Democrat dominion. This new “Civil War” is to begin this year and, like the last one, will have an economic cause. Unfortunately, the thinking of Leyden and Teixeira is steeped in scientific ignorance which drives their thesis.

According to Leyden and Teixeira both the last, and now upcoming, Civil Wars are about fundamentally different economic systems that cannot coexist. In the mid-nineteenth century it was an agrarian economy dependent on slaves versus a capitalist manufacturing economy dependent on free labor. Today, the conflict is between (i) the red states which are dependent on carbon-based energy systems like coal and oil, and (ii) the blue states that are shifting to clean energy and weaning themselves off of carbon. Granting this dubious thesis, why are these two economies so irreconcilable? Because of global warming and the terrible natural disasters it brings:

In the era of climate change, with the mounting pressure of increased natural disasters, something must give.

You read that right. Leyden’s and Teixeira’s thesis is driven by anthropogenic global warming, or AGW, which they sprinkle throughout the article. Red states are bad because they deny it, blue states are good because they face the truth and reckon with it with progressive policies. After all, it is “the scientific consensus that climate change is happening, that human activity is the main cause, and that it is a serious threat.”

It must be nice to go through life with such certainty. Ignorance, as they say, is bliss.

We can begin with the most obvious mistake. While it certainly revs people up to hear that global warming is “a serious threat,” we have little evidence for this. Even those “consensus” scientists agree that we are not justified in claiming the sky is falling. And, no, in spite of what you may have heard, the recent hurricanes were probably not products of global warming.

But what about that scientific consensus that Leyden and Teixeira speak of? Doesn’t that make their case?

Unfortunately, Leyden and Teixeira are the latest example of how historians have utterly failed. In spite of their best efforts, historians, and especially historians of science, have not been able to disabuse people of the myths of science.

In science, as in politics, majorities are majorities until they aren’t. A scientific consensus can occur both for theories that end up enshrined in museums and for theories that end up dumped in the trash bin.

Once upon a time the scientific consensus held the Earth was the center of the universe. Only later did the scientific consensus shift to the Sun as the center of the universe.

Both were wrong.

What Mr. Nelson taught you in seventh grade history class was right after all: If you don’t understand history you will repeat its mistakes. And Leyden and Teixeira are today’s poster children of such naiveté.

A scientific consensus for a theory means just one thing: That the majority of scientists accept the theory. Nothing more, nothing less. The problem with science, as Del Ratzsch once explained, is that it is done by people.

What we do know about AGW is that the data have been massaged, predictions have failed, publications have been manipulated, enormous pressure to conform has been applied, and ever since Lynn White’s 1966 AAAS talk the science has been thoroughly politicized.

None of this means that AGW is false, but the theories that end up in textbooks and museums don’t usually need enormous social and career pressures for sustenance.

As it stands scientists have been walking back the hype (it’s climate change, not global warming anymore), and trying to explain the lack of a hockey stick temperature rise (the ocean is temporarily absorbing the heat); insiders are backing out (see here and here), and new papers are showing current temperatures have not been so out of the ordinary (e.g., here).

AGW is certainly an important theory to study. And perhaps it is true. But its track record of prediction is far more important than the number of people voting for it.

The idea that AGW is the driver behind a new Civil War in America to start, err, later this year is simply absurd. I’m less concerned about Leyden’s and Teixeira’s political desires than about the mythologies they are built on.

Religion drives science, and it matters.

Sunday, April 8, 2018

Ethan Siegel Updates the Drake Equation

Not Even Wrong

Astrophysicist Ethan Siegel may not have been aware of the phosphorous problem when he wrote his article on fixing the Drake Equation which appeared at Forbes last week. But he certainly should have known about the origin of life problem. His failure to account for the former is a reasonable mistake, but his failure to account for the latter is not.

The Drake Equation is simply the product of a set of factors, estimating the number of active, technologically-advanced, communicating civilizations in our galaxy—the Milky Way. Siegel brings the Drake Equation up to date with a few modifications.
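Since the equation is just a product of factors, the arithmetic can be sketched in a few lines of Python. The factor names and values below are illustrative assumptions chosen only to land near the 10,000 figure discussed below; they are not Siegel’s actual inputs:

```python
# Sketch of the Drake Equation as a simple product of factors.
# All factor values are illustrative placeholders, not Siegel's figures.
drake_factors = {
    "stars_in_galaxy": 4e11,        # rough star count for the Milky Way
    "fraction_with_planets": 1.0,   # essentially all stars host planets
    "fraction_habitable": 0.25,     # stars with conditions for habitability
    "prob_life_arises": 1e-4,       # probability of abiogenesis per habitable world
    "fraction_intelligent": 0.01,   # life that becomes intelligent
    "fraction_communicating": 0.1,  # intelligent life that signals
}

def drake_estimate(factors):
    """Multiply all the factors together to get an estimated count of worlds."""
    n = 1.0
    for value in factors.values():
        n *= value
    return n

print(f"Estimated worlds: {drake_estimate(drake_factors):.0f}")
```

With these placeholder values the product comes out to roughly 10,000, which is the point of the exercise: the final count is nothing more than the product of the chosen factors, so every digit of the answer is inherited from the guesses going in.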

He is careful to ensure that his final result is not too large and not too small. Too large an estimate would contradict the decades-long SETI (search for extraterrestrial intelligence) project which, err, has discovered precisely zero radio signals in the cosmos that could be interpreted as resulting from an intelligent civilization. Too small an estimate would signal an end to Siegel’s investigations of extraterrestrial intelligence.

What is needed is a Goldilocks number—not too large and not too small. Siegel optimistically arrives at a respectable 10,000 worlds in the Milky Way “teeming with diverse, multicellular, highly differentiated forms of life,” but given the length of time any such civilization is likely to exist, there is only a 10% chance of such a civilization existing co-temporally with us.

Ahh, just right. Small enough to avoid contradicting SETI, but large enough to be interesting.

But Siegel’s value of 25% for the third factor, the fraction of stars with the right conditions for habitability, seems much too high given new research indicating phosphorus is hard to come by in the cosmos.

The problem, it seems, is that phosphorus (the P in the ubiquitous energy-carrying ATP molecule you learned about in high school biology class) is created only in the right kind of supernovae, and there just isn’t enough to go around. As one of the researchers explained:

The route to carrying phosphorus into new-born planets looks rather precarious. We already think that only a few phosphorus-bearing minerals that came to the Earth—probably in meteorites—were reactive enough to get involved in making proto-biomolecules. If phosphorus is sourced from supernovae, and then travels across space in meteoritic rocks, I'm wondering if a young planet could find itself lacking in reactive phosphorus because of where it was born? That is, it started off near the wrong kind of supernova? In that case, life might really struggle to get started out of phosphorus-poor chemistry, on another world otherwise similar to our own.

This could be trouble for Siegel. The problem is that in arriving at his goal-seeked 10% result he has committed to specific values. The wiggle room is now gone, and new findings such as the phosphorus problem will only make things worse. Siegel’s 10% result could easily drop by 10 orders of magnitude or more on the phosphorus problem alone.

That would be devastating, but it would be nothing compared to a realistic accounting for the origin of life problem. That is Siegel’s fifth factor and he grants it a value of 1-in-10,000. That is, for worlds in habitable zones, there is a 1/10,000 probability of life arising from non-life, at some point in the planet’s history.

That is absurd. Siegel pleads ignorance, and claims 1-in-10,000 is “as good a guess as any,” but of course it isn’t.

We can begin by dispelling the silly proclamations riddling the literature, that the origin of life problem has been essentially solved. As the National Academy of Sciences once declared:

For those who are studying the origin of life, the question is no longer whether life could have originated by chemical processes involving nonbiological components. The question instead has become which of many pathways might have been followed to produce the first cells [1]

Fortunately the National Academy of Sciences has since recanted that non-scientific claim, and admitted there is no such solution at hand. Such scientific realism can now be found elsewhere as well.

The origin of life problem has not been solved, not even close. But that doesn’t mean we are left with no idea of how hard the problem is, and that 1-in-10,000 (i.e., 10^-4) is “as good a guess as any,” as Siegel claims. Far from it. Even the evolution of a single protein has been repeatedly shown to be far, far less likely than 10^-4.

As for something more complicated than a single protein, one study estimated the chances of a simple replicator evolving at 1 in 10^1018. It was a very simple calculation and a very rough estimate. But at least it is a start.

One could argue that the origin of life problem is more difficult than that, or less difficult than that. But Siegel provided no rationale at all. He laughably set the bounds at one-in-ten and one-in-a-million, and then with zero justification arbitrarily picked 1-in-10,000.

In other words, Siegel set the lower and upper limits at 10^-1 and 10^-6, when even a single protein has been estimated at about 10^-70, and a simple replicating system at 10^-1018.

Siegel’s estimate is not realistic. With zero justification or empirical basis, Siegel set the probability of the origin of life at a number that is more than 1,000 orders of magnitude less than what has been estimated.

Siegel’s estimate was not one thousand times too optimistic, it was one thousand orders of magnitude too optimistic. It was not too optimistic by three zeros; it was too optimistic by one thousand zeros. Siegel is not doing science. He is goal-seeking, using whatever numbers he needs to get the right answer.
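The arithmetic behind that gap can be sketched directly. The figure of 10^7 habitable worlds sitting upstream of the abiogenesis factor is an illustrative assumption, not Siegel’s number; working in log10 is necessary because a value like 10^-1018 cannot even be stored in an ordinary double-precision float:

```python
# Expected life-bearing worlds = (habitable worlds) x (probability life arises).
# log10_habitable_worlds = 7 (i.e., 10^7 habitable worlds) is an illustrative
# assumption. Working in log10 sidesteps floating-point underflow: 10^-1018
# underflows to zero as a double.
log10_habitable_worlds = 7

estimates = {
    "Siegel's 1-in-10,000 guess": -4,
    "single-protein estimate": -70,
    "simple-replicator estimate": -1018,
}

for label, log10_p in estimates.items():
    log10_expected = log10_habitable_worlds + log10_p
    print(f"{label}: expected life-bearing worlds ~ 10^{log10_expected}")
```

Under Siegel’s guess the expected count stays comfortably positive (~10^3 worlds); under the single-protein estimate it collapses to ~10^-63, and under the replicator estimate to ~10^-1011—effectively zero, which is why the choice of this one factor dominates the entire calculation.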

Religion drives science, and it matters.

Brochosome Proteins Encoded By Orphan Genes

A Pattern Problem

A few years ago Paul Nelson debated Joel Velasco on the topic of design and evolution. Nelson masterfully demonstrated design in nature. For his part Velasco also provided an excellent defense of evolution. But the Epicurean claim that the world arose via random chance is not easy to defend, and Velasco’s task would be challenging. Consider, for example, the orphans which Nelson explained are a good example of taxonomically-restricted designs. Such designs make no sense on evolution, and though Velasco responded with many rebuttals, none were very convincing. Since that debate the orphan problem has become worse, as highlighted by a new study of brochosomes.


The term orphan refers to a DNA open reading frame, or ORF, without any known similar sequence in other species or lineages, and hence ORFan or “orphan.” Since orphans are unique to a particular species or lineage, they contradict common ancestry’s much celebrated nested hierarchy model.
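For readers unfamiliar with the term, what counts as an open reading frame can be illustrated with a minimal scan for an in-frame start and stop codon. This is only a sketch: it checks a single strand and a single start codon, whereas real gene-finding tools examine all six reading frames and apply many more criteria:

```python
# Minimal sketch of scanning one strand of DNA for open reading frames:
# an ATG start codon followed, in the same reading frame, by a stop codon.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=2):
    """Return (start, end) index pairs of ORFs on the forward strand."""
    seq = seq.upper()
    orfs = []
    for start in range(len(seq) - 2):
        if seq[start:start + 3] != "ATG":
            continue
        # Walk forward codon by codon in the same reading frame.
        for pos in range(start + 3, len(seq) - 2, 3):
            if seq[pos:pos + 3] in STOP_CODONS:
                if (pos - start) // 3 >= min_codons:
                    orfs.append((start, pos + 3))
                break
    return orfs

# ATG GCT TAA -> one ORF spanning indices 0..9
print(find_orfs("ATGGCTTAA"))  # prints [(0, 9)]
```

An orphan, then, is simply an ORF like the ones this scan would report, but whose sequence has no recognizable counterpart in any other species or lineage.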

The Nelson-Velasco Debate

Velasco addressed the orphan problem with several arguments. First, Velasco reassured the audience that there isn’t much to be concerned with here because “Every other puzzle we’ve ever encountered in the last 150 years has made us even more certain of a fact that we already knew, that we’re all related.”

Second, Velasco argued that the whole orphan problem is contrived, as it is nothing more than a semantic misunderstanding—a confusion of terms. These are nothing more than open reading frames without significant similarity to any known sequence.

Third, Velasco argued that many of the orphans are so categorized merely because the search for similar sequence is done only in “very distantly related” species.

Furthermore, and fourth, Velasco argued that orphans are really nothing more than a gap in our knowledge. For the more we know about a species, the more the orphan problem goes away. And which species do we know the most about? Ourselves of course. And we have no orphans: “Well what about humans, we know a lot about humans. How many orphan genes are in humans? What do you think? Zero.”

In fact, and fifth, Velasco argued that while new orphans are discovered with each new genome that is decoded, the trend is slowing and is suggestive that in the long run relatives for these orphans will be found: “In fact if you trend the absolute number going up, as opposed to the percentage of orphan genes in organisms, that number is going down.”

So to summarize Velasco’s position: the orphan problem will be solved so don’t worry about it, but actually orphans are not a problem at all but rather a semantic misunderstanding, but on the other hand the orphan problem is a consequence of incomplete genomic data, but actually on the other hand the problem is a consequence of insufficient knowledge about the species, and in any case even though the number of known orphans keeps on rising, they will eventually go away because the orphans, as a percentage of the overall genomic data (which has been exploding exponentially), are going down.

This string of evolution arguments reminds us of the classic dog-owner’s defense: He’s not my dog, he didn’t bite you, and besides you hit the dog first anyway. Not surprisingly, each of Velasco’s arguments fails, as I explained here.

In fact, there are many orphans, and while function can be difficult to identify, it has been found for many orphans. As science writer Helen Pilcher explained:

In corals, jellyfish and polyps, orphan genes guide the development of explosive stinging cells, sophisticated structures that launch toxin-filled capsules to stun prey. In the freshwater polyp Hydra, orphans guide the development of feeding tentacles around the organism’s mouth. And the polar cod’s orphan antifreeze gene enables it to survive life in the icy Arctic.

Up to a third of genomes have been found to be unique, as this review explains:

Comparative genome analyses indicate that every taxonomic group so far studied contains 10–20% of genes that lack recognizable homologs in other species. Do such ‘orphan’ or ‘taxonomically-restricted’ genes comprise spurious, non-functional ORFs, or does their presence reflect important evolutionary processes? Recent studies in basal metazoans such as Nematostella, Acropora and Hydra have shed light on the function of these genes, and now indicate that they are involved in important species-specific adaptive processes. 

And this is yet another failed prediction of evolution, as this paper explains:

The frequency of de novo creation of proteins has been debated. Early it was assumed that de novo creation should be extremely rare and that the vast majority of all protein coding genes were created in early history of life. However, the early genomics era led to the insight that protein coding genes do appear to be lineage-specific. Today, with thousands of completely sequenced genomes, this impression remains.

Why then was Velasco so confident and almost nonchalant in his argumentation? Why was he so assured that, one way or another, the orphan problem was not a problem? And why did he believe there are zero orphans in humans, and that it is merely a matter of studying biology before the orphans go away?

Lander Orphan Study

It could be due to a significant 2007 study from Eric Lander’s group which rejected most of the large number (several thousands) of orphans that had been tentatively identified in the human genome. The study confidently concluded that “the vast majority” of the orphans were “spurious”:

The analysis here addresses an important challenge in genomics— determining whether an ORF truly encodes a protein. We show that the vast majority of ORFs without cross-species counterparts [i.e., orphans] are simply random occurrences. The exceptions appear to represent a sufficiently small fraction that the best course is would be [sic] consider such ORFs as noncoding in the absence of direct experimental evidence.

The authors went on to propose that “it is time to undertake a thorough revision of the human gene catalogs by applying this principle to filter the entries.”

That peer-reviewed paper, in a leading journal, was well received (e.g., Larry Moran called it an “excellent study”) and it certainly appeared to be authoritative. So it is not surprising that Velasco would be confident about orphans. For all appearances, they really were no problem for evolution.

There was just one problem. This was all wrong.

There was no scientific evidence that those human sequences, identified as orphans, were “spurious.” The methods used in the Lander study were full of evolutionary assumptions. The results entirely hinged on evolution. Although the paper did not explicitly state this, without the assumption of evolution no such conclusions could have been made.

This is what philosophers refer to as theory-ladenness. Although the paper authoritatively concluded the vast majority of the orphans in the human genome were spurious, this was not an empirical observation or inference, as it might seem to some readers. Their data (and proposed revisions to human gene catalogs), methods, and conclusions were all laden, at their foundation, with the theory of evolution.

So Velasco’s argument was circular. To defend evolution he claimed there were zero orphans in the human genome, but that “fact” was a consequence of assuming evolution is true in the first place. If the assumption of evolution is dropped, then there is no evidence for that conclusion.


Since the Nelson-Velasco debate the orphan problem has just gotten worse. Consider, for example, brochosomes, which are intricate, symmetric secretory granules forming super-oily coatings on the integuments of leafhoppers. Brochosomes develop in glandular segments of the leafhopper’s Malpighian tubules.

The main component of brochosomes, as shown in a recent paper, is proteins. And these constituent proteins, as well as brochosome-associated proteins, are mostly encoded by orphan genes.

As the paper explains, most of these proteins “appear to be restricted to the superfamily Membracoidea, adding to the growing list of cases where taxonomically restricted genes, also called orphans, encode important taxon-specific traits.”

And how did all these orphan genes arise so rapidly? The paper hypothesizes that “It is possible that secreta exported from the organism may evolve especially rapidly because they are not strongly constrained by interactions with other traits.”

That evolutionists can so easily reach for just-so stories, such as this, is yet another example of how false predictions have no consequence for evolutionary theory. Ever since Darwin evolutionists have proclaimed how important it is that the species fall into the common descent pattern. This has especially been celebrated at the molecular level.

But of course the species fall into no such pattern, and when obvious counterexamples present themselves, such as the brochosome proteins, evolutionists do not miss a beat.

There is no empirical content to this theory. Predictions hailed as great successes and confirmations of the truth of evolution suddenly mean nothing and have no consequence when the falsification becomes unavoidable.

Religion drives science, and it matters.

h/t: El Hombre

Sunday, April 1, 2018

The Unauthorized Answers to Jerry Coyne’s Blog

What Your Biology Teacher Didn’t Tell You

Jerry Coyne’s website (Why Evolution Is True) has posed study questions for learning about evolution. Evolutionists have responded in the “Comment” section with answers to some of the questions (see here, here, and here). But when I posted a few relevant thoughts, they were quickly deleted after briefly appearing. That’s unfortunate because those facts can help readers to understand evolution. Here is what I posted:

Well the very first question is question begging:

“Why is the concept of homology crucial for even being able to talk about organic structure?”

It isn’t. We are “able to talk about organic structure” without reference to homology. In fact, if you are interested in biology, you can do more than merely talk. Believe it or not, you actually can investigate how organic structure works without even referencing homology. The question reveals the underlying non-scientific Epicureanism at work. This is not to say homology is not an important concept and area of study. Of course it is. But it is absurd to claim it is required even merely to talk about organic structure. Let’s try another:

“What is Darwin’s explanation for homology?”

Darwin’s explanation for homology is that it is a consequence of common descent. He repeatedly argues that homologous structures provide good examples of non-adaptive patterns as well as disutility, thus confirming common descent by virtue of falsifying the utilitarianism-laden doctrine of creation. See for example pp. 199-200, where Darwin concludes:

“Thus, we can hardly believe that the webbed feet of the upland goose or of the frigate-bird are of special use to these birds; we cannot believe that the same bones in the arm of the monkey, in the fore leg of the horse, in the wing of the bat, and in the flipper of the seal, are of special use to these animals. We may safely attribute these structures to inheritance.”

Pure metaphysics, and ignoring the enormous problem that non-adaptive patterns cause for evolutionary theory. Oh my. Well, let’s try another:

“How does Darwin’s account of serial homology (the resemblance of parts within an organism, for example, the forelimbs to the hindlimbs, or of a cervical vertebra to a thoracic vertebra) depend on the repetition of parts or segmentation?”

Hilarious. It’s a wonderful example of teleology, just-so stories, and metaphysics, so characteristic of the genre, all wrapped up in a single passage (pp. 437-8). Darwin goes into a typical rant about how designs and patterns (serial homologies in this case) absolutely refute utilitarianism. “How inexplicable are these facts on the ordinary view of creation!,” he begins. Pure metaphysics.

He then provides a just-so story about how “we may readily believe that the unknown progenitor of the vertebrata possessed many vertebræ,” etc., and that like any good breeder, natural selection “should have seized on a certain number of the primordially similar elements, many times repeated, and have adapted them to the most diverse purposes.”

Seized on? Wow, that natural selection sure is good—long live Aristotelianism. Gotta love this mythology.

Monday, February 19, 2018

This Didn’t Evolve a Few Mutations At a Time

Action Potentials

Are there long, gradual pathways of functional intermediate structures, separated by only one or perhaps a few mutations, leading to every single species, and every single design and structure in all of biology? As we saw last time, this has been a fundamental claim and expectation of evolutionary theory which is at odds with the science.* If one mutation is rare, a lot of mutations are astronomically rare. For instance, if a particular mutation has a one-in-a-hundred-million (one in 10^8) chance of occurring in a new individual, then a hundred such particular mutations have a one in 10^800 chance of occurring. It’s not going to happen. Let’s have a look at an example: nerve cells and their action potential signals.

[* Note: Some evolutionists have attempted to get around this problem with the neutral theory, but that just makes matters worse].
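The arithmetic behind the figure above is just the compounding of independent probabilities. A minimal sketch in Python, using the illustrative one-in-10^8 value from the text (exact rational arithmetic avoids floating-point underflow):

```python
from fractions import Fraction

# Illustrative figure from the text: one specific mutation arises in a
# new individual with probability 1 in 10^8.
p_one = Fraction(1, 10**8)

# If 100 specific mutations must all occur, and the events are independent,
# the joint probability is the product of the individual probabilities.
p_hundred = p_one ** 100

assert p_hundred == Fraction(1, 10**800)
```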

Nerve cells have a long tail, the axon, which carries an electrical impulse. The tail can be several feet long, and its signal might stimulate a muscle to action, control a gland, or report a sensation to the brain.

Like a cable containing thousands of different telephone wires, nerve cells are often bundled together to form a nerve. Early researchers considered that perhaps the electrical impulse traveled along the nerve cell tail like electricity in a wire. But they soon realized that the signal in nerve cells is too weak to travel very far. The nerve cell would need to boost the signal along the way for it to travel along the tail.

After years of research it was discovered that the signal is boosted by membrane proteins. First, there is a membrane protein that simultaneously pumps two potassium ions into the cell and three sodium ions out of the cell. This sets up a chemical gradient across the membrane. There is more potassium inside the cell than outside, and there is more sodium outside than inside. Also, there are more negatively charged ions inside the cell, so there is a voltage drop (50–100 millivolts) across the membrane.
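The voltage figures quoted above can be sanity-checked with the Nernst equation, E = (RT/zF)·ln([out]/[in]), which gives the equilibrium potential implied by a concentration gradient. A quick sketch in Python; the concentration values are typical textbook numbers for mammalian cells, assumed here purely for illustration:

```python
import math

R = 8.314     # gas constant, J/(mol*K)
T = 310.0     # body temperature, K
F = 96485.0   # Faraday constant, C/mol

def nernst_mV(z, c_out, c_in):
    """Equilibrium potential (millivolts) for an ion of charge z."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Typical mammalian concentrations in mM (illustrative values)
E_K  = nernst_mV(+1, c_out=5.0,   c_in=140.0)  # roughly -89 mV
E_Na = nernst_mV(+1, c_out=145.0, c_in=12.0)   # roughly +66 mV
```

The potassium gradient pulls the membrane toward a negative resting potential, and the sodium gradient pulls it positive, which is why opening the sodium channels momentarily reverses the voltage drop.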

In addition to the sodium-potassium pump, there are also sodium channels and potassium channels. These membrane proteins allow sodium and potassium, respectively, to pass through the membrane. They are normally closed, but when the decaying electrical impulse travels along the nerve cell tail, it causes the sodium channels to quickly open. Sodium ions outside the cell then come streaming into the cell down the electro-chemical gradient. As a result, the voltage drop is reversed and the decaying electrical impulse, which caused the sodium channels to open, is boosted as it continues on its way along the nerve cell tail.

When the voltage goes from negative to positive inside the cell, the sodium channels slowly close and the potassium channels open. Hence the sodium channels are open only momentarily, and now with the potassium channels open, the potassium ions concentrated inside the cell come streaming out down their electro-chemical gradient. As a result the original voltage drop is reestablished.

This process repeats itself as the electrical impulse travels along the tail of the nerve cell, until the impulse finally reaches the end of the nerve cell. Although we’ve left out many details, it should be obvious that the process depends on the intricate workings of the three membrane proteins. The sodium-potassium pump helps set up the electro-chemical gradient, the electrical impulse is strong enough to activate the sodium channel, and then the sodium and potassium channels open and close with precise timing.
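The boost-and-recover cycle just described is classically modeled with the Hodgkin–Huxley equations. A minimal qualitative sketch is the two-variable FitzHugh–Nagumo simplification, which mimics the fast sodium-like depolarization and the slow potassium-like recovery without modeling the individual channels (parameters are the standard textbook values, used here only for illustration):

```python
# FitzHugh-Nagumo model: v stands in for membrane voltage, w for the
# slow channel recovery variable. Forward-Euler integration.
a, b, tau, I = 0.7, 0.8, 12.5, 0.5   # textbook parameters; I is a sustained input
dt, steps = 0.01, 20000

v, w = -1.0, 1.0
trace = []
for _ in range(steps):
    dv = v - v**3 / 3 - w + I        # fast depolarization (sodium-like)
    dw = (v + a - b * w) / tau       # slow recovery (potassium-like)
    v += dt * dv
    w += dt * dw
    trace.append(v)

# With this input the model fires repeatedly: v swings between a
# depolarized peak and a hyperpolarized trough, spike after spike.
```

Each swing of v mirrors the cycle above: a rapid upstroke as the fast variable runs away, then a reset as the slow recovery variable catches up and restores the resting state.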

How, for example, are the channels designed to be ion-selective? Sodium is about 40% smaller than potassium, so the sodium channel can exclude potassium if it is just big enough for sodium. Random mutations must have hit upon an amino acid sequence that would fold up just right to provide the right channel size.

The potassium channel, on the other hand, is large enough for both potassium and sodium, yet it is highly efficient. It somehow excludes sodium almost perfectly (the potassium-to-sodium selectivity ratio is about 10,000 to 1), yet allows potassium to pass through almost as if there were nothing in the way.

Nerve cells are constantly firing off in your body. They control your eyes as you read these words, and they send back the images you see on this page to your brain. They, along with chemical signals, control a multitude of processes in our bodies, and there is no scientific reason to think they gradually evolved, one mutation at a time.

Indeed, that idea contradicts everything we know from the science. And yet this is what evolutionists believe. Let me repeat that: evolutionists believe nerve cells and their action potential designs evolved one mutation at a time. Indeed, evolutionists believe this is a proven fact, beyond all reasonable doubt.

It would be difficult to imagine a more absurd claim. So let’s have a look at the details of this line of thinking. Here is a recent paper from the Royal Society, representing the state of the art in evolutionary thinking on this topic. The paper claims to provide a detailed explanation of how early evolution produced action potential technology.

Sounds promising, but when evolutionists speak of “details,” they have something slightly different in mind. Here are several passages from the paper which reveal that not only is there a lack of details, but that the study is thoroughly unscientific.

We propose that the next step in the evolution of eukaryote DCS [membrane depolarization (through uncontrolled calcium influx), contraction and secretion] coupling has been the recruitment of stretch-sensitive calcium channels, which allow controlled influx of calcium upon mechanical stress before the actual damage occurs, and thus anticipate the effects of membrane rupture.

The recruitment of calcium channels? And exactly who did the recruiting? Here the authors rely on vague terminology to paper over a host of problematic details of just how random mutations somehow performed this recruiting.

To prevent the actual rupture, the first role of mechanosensory Ca++ channels might have been to pre-activate components of the repair pathway in stretched membranes.

“To prevent”? Let’s spell out the logic a little more clearly. The authors are hypothesizing that these calcium channels evolved the ability to pre-activate the repair pathway “to prevent” actual rupture. By spelling out the logic a bit more clearly, we can see more easily the usual teleology at work. The evolution literature is full of teleology, and for good reason. Evolutionists are unable to formulate and express their ideas without it. The ever-present infinitive form is the tell-tale sign. Aristotelianism is dead—long live Aristotelianism.

As another anticipatory step, actomyosin might have been pre-positioned under the plasma membrane (hence the cortical actomyosin network detected in every eukaryotic cell) and might have also evolved direct sensitivity to stretch … Once its cortical position and mechanosensitivity were acquired, the actomyosin network could automatically fulfil an additional function: cell-shape maintenance—as any localized cell deformation would stretch the cortical actomyosin network and trigger an immediate compensatory contraction. This property would have arisen as a side-effect (a ‘spandrel’) of the presence of cortical actomyosin for membrane repair, and quickly proved advantageous.

An “anticipatory step”? “Pre-positioning”? Actomyosin “evolved” sensitivity to stretch? The position and mechanosensitivity “were acquired”? The network could “fulfil an additional function”? Sorry, but molecular machines (such as actomyosin) don’t “evolve” anything. There is more teleology packed into these few sentences than any medieval tract. And for good measure the authors also add the astonishing serendipity that this additional function “would have arisen as a side-effect.” That was lucky.

Once covering the cell cortex, the actomyosin network acquired the ability to deform the cell by localized contraction.

The actomyosin network “acquired the ability” to deform the cell by localized contraction? Smart move on the part of the network. But may we ask just how did that happen?

Based on the genomic study of the protist Naegleria which has a biphasic life cycle (alternating between an amoeboid and a flagellated phase), amoeboid locomotion has been proposed to be ancestral for eukaryotes. It might have evolved in confined interstitial environments, as it is particularly instrumental for cells which need to move through small, irregularly shaped spaces by exploratory deformation.

Amoeboid locomotion evolved “as it is particularly instrumental.” No infinitive form, but this is no less teleological. Things don’t evolve because they are “instrumental.” What the authors fail to inform their readers of is that this would require an enormous number of random mutations.

One can hypothesize that, if stretch-sensitive calcium channels and cortical actomyosin were part of the ancestral eukaryotic molecular toolkit (as comparative genomics indicates), membrane deformation in a confined environment would probably trigger calcium influx by opening of stretch-sensitive channels, which would in turn induce broad actomyosin contraction across the deformed part of the cell cortex, global deformation and cell movement away from the source of pressure.

The concept of a “molecular toolkit” is standard in evolutionary thought, and another example of teleological thinking.

One can thus propose that a simple ancestral form of amoeboid movement evolved as a natural consequence of the scenario outlined above for the origin of cortical actomyosin and the calcium–contraction coupling; once established, it could have been further elaborated.

Amoeboid movement evolved “as a natural consequence,” and “once established” was “further elaborated”? This is nothing more than teleological story-telling with no supporting evidence.

It is thus tempting to speculate that, once calcium signalling had gained control over primitive forms of amoeboid movement, the same signalling system started to modify ciliary beating, possibly for ‘switching’ between locomotor states.

Calcium signaling “gained control” and then “started to modify” ciliary beating “for ‘switching’ between locomotor states”? The “for switching” is yet another purposive construction, and “gained control” is an active move attributed to the calcium signaling system. Pure, unadulterated teleology.

Possibly, in ancestral eukaryotes calcium induced a relatively simple switch (such as ciliary arrest, as still seen in many animal cells and in Chlamydomonas in response to high Ca++ concentrations), which was then gradually modified into more subtle modulations of beating mode with a fast turnover of molecular actors mediated by differential addition, complementation and loss.

“Calcium induced a relatively simple switch”? Sorry, ions don’t induce switches, simple or otherwise. And the switch “was then gradually modified into more subtle modulations”? Note how the passive voice obviates those thorny details. The switch “was modified” conveniently omits the fact that such modification would have to occur via random mutation, one mutation at a time.

Alternatively, control of cilia by calcium could have evolved convergently—but such convergence would then have been remarkably ubiquitous, as there seems to be no eukaryotic flagellum that is not controlled by calcium in one way or another.

“Could have evolved convergently”? And exactly how would that happen? At least the authors then admit to the absurdity of that alternative.

Unfortunately, they lack such sensibility for the remainder of the paper. As we saw above, the paper is based on a sequence of teleological thinking. It falls into the evolutionary genre where evolution is taken, a priori, as a given. This going-in assumption underwrites vast stretches of teleological thought and cartoon-level storytelling. Not only is there a lack of empirical support, but the genre is utterly unscientific, as revealed by even a mildly critical reading.

And needless to say, the paper does absolutely nothing to alleviate the problem we began with. The many leaps of logic and reasoning in the paper reveal all manner of monumental changes evolution requires to construct nerve cells and the action potential technology. We are not looking at a narrative of minute, gradual changes, each contributing to the overall fitness. Many, many simultaneous mutations are going to be needed. Even a conservative minimum number of 100 simultaneous mutations leads to the untenable result of a one in 10^800 chance of occurring.

It’s not going to happen. Religion drives science, and it matters.

Saturday, February 10, 2018

Here is How Evolutionists Respond to the Evidence


Mutations are rare, and good ones are even more rare. One reason mutations are rare is that there are sophisticated error correction mechanisms in our cells. So, according to evolution, random mutations created correction mechanisms to suppress random mutations. And that paradox is only the beginning, because error correction mechanisms, like pretty much everything else in biology, require many, many mutations to construct. If one mutation is rare, a lot of mutations are astronomically rare. For instance, if a particular mutation has a one-in-a-million (one in 10^6) chance of occurring in a new individual, then a hundred such particular mutations have a one in 10^600 chance of occurring. It’s not going to happen.
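The same compounding can be checked in log space, which is convenient because probabilities this small underflow ordinary floating point. A quick sketch with the one-in-a-million figure used above:

```python
import math

# One specific mutation: probability 1 in 10^6
log10_p_one = math.log10(1e-6)

# 100 independent specific mutations: in log space the exponents add
log10_p_hundred = 100 * log10_p_one

# 100 * (-6) = -600, i.e. a probability of about 1 in 10^600
assert abs(log10_p_hundred + 600.0) < 1e-6
```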

How do evolutionists reckon with this scientific problem?

First, one common answer is to dismiss the question altogether. Evolution is a fact, don’t worry about the details. Obviously this is not very compelling.

Second, another common answer is to cast the problem as a strawman argument against evolution, and appeal to gradualism. Evolutionists going back to Darwin have never described the process as “poof.” They do not, and never have, understood the process as the simultaneous origin of tens, hundreds, or more mutations. Instead, it is a long, slow, gradual process, as Darwin explained:

If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case […] Although the belief that an organ so perfect as the eye could have been formed by natural selection, is enough to stagger any one; yet in the case of any organ, if we know of a long series of gradations in complexity, each good for its possessor, then, under changing conditions of life, there is no logical impossibility in the acquirement of any conceivable degree of perfection through natural selection

The Sage of Kent could find “no such case”? That’s strange, because they are ubiquitous. And with the inexorable march of science, it is just getting worse. Error correcting mechanisms are just one example of many. Gradualism is not indicated.

What if computer manufacturers were required to have a useful, functional electronic device at each step in the manufacturing process? With each new wire or solder joint, what must emerge is a “long series of gradations in complexity, each good for its possessor.”

That, of course, is absurd (as Darwin freely confessed). From clothing to jet aircraft, the manufacturing process is one of parts, tools, and raw materials strewn about in a useless array, until everything comes together at the end.

The idea that every single biological structure and design can be constructed by one or two mutations at a time not only has not been demonstrated; it has no correspondence to the real world. It is just silly.

What evolution requires is that biology be different, but there is no reason to believe such a heroic claim. The response that the multiple-mutations problem is a “strawman” argument does not reckon with the reality of the science.

Third, some evolutionists recognize this undeniable evidence and how impossible evolution is. Their solution is to call upon a multiverse to overcome the evidence. If an event is so unlikely it would never occur in our universe, just create a multitude of universes. And how many universes are there? The answer is: as many as are needed. In other words, when confronted with an impossibility, evolutionists simply contrive a mythical solution.

Fourth, another common response that evolutionists make is to appeal to the fitness of the structure in question. Biological designs, after all, generally work pretty well, and therefore have high fitness. Is this not enough to prove that it evolved? For evolutionists, if something helps, then it evolves. Presto.

To summarize, evolutionists have four different types of responses to the evidence, and none of the responses do the job.

Religion drives science, and it matters.