Sunday, April 28, 2019

Satellite DNA is Essential and Species-Specific in Drosophila melanogaster

Seems Incompatible

This week’s “we thought it was junk but it turned out to be crucial” study comes with the added bonus that the so-called “junk” is also species-specific / taxonomically restricted. The general topic is tandemly repeated satellite DNA in the much studied fruit fly, Drosophila melanogaster. These satellite DNA regions comprise 15-20% of D. melanogaster’s genome, and one of the regions, AAGAG(n), is transcribed across many of D. melanogaster’s cell types.

While evolutionists have hoped and argued that transcription (not to mention mere presence) does not imply function (after all biology is one big hack-job, so RNA polymerase doesn’t always know what it is doing), D. melanogaster is once again not cooperating. Not only is the satellite DNA ubiquitous and widely transcribed, the AAGAG RNA was found to be important for male fertility. Kind of important.

But it gets worse. Much worse.

Not only is D. melanogaster’s satellite DNA ubiquitous, widely transcribed across many cell types, and of crucial importance, it is species-specific. The levels of AAGAG satellite DNA are orders of magnitude lower in D. simulans and D. sechellia, and it is nearly absent in other species within the Drosophila genus.

This makes no sense on evolution. Now we must say that not only does a massive quantity of AAGAG satellite DNA abruptly appear in a particular fly species, but it immediately takes on an absolutely crucial role. A role which, of course, was somehow already fulfilled in the putative evolutionary ancestor.

In other words, the function in question (male fertility) was rumbling along just fine, and then with a new species, and not in many of its sister species, the crucial function was somehow rewired and reassigned to a relatively new, massive, DNA satellite sequence.

This is absurd.

Even the paper admits that, “Finally, it is worth noting that the expression of simple satellites for essential functions seems incompatible with the fast evolution of satellite DNAs, reflected in dramatic changes in both sequence types and copy numbers across species.”

Ya think?

The next step will be for evolutionists to convert this spectacular failure into compelling evidence that evolution can produce DNA that is both (i) species-specific, and (ii) functionally essential.

And why is that true?

Because, after all, the satellite DNA evolved, of course. And since it is species-specific and essential, we now have evidence evolution can produce such an unexpected outcome.

That’s just good, solid, scientific research.

Religion drives science, and it matters.

Wednesday, March 6, 2019

Kirk MacGregor: Evolution Proves Molinism

And Molinism Proves Evolution

“Evolution provides a theological solution to a theological problem, and the science is sandwiched somewhere in between. But the theological premises are denied so the theological result is seen as coming from science, and science inappropriately attains the status of truth giver.” I made that observation in Darwin’s God, and unfortunately it remains just as true today. The latest example of this phenomenon comes in the brand new volume, Calvinism and Middle Knowledge where, in Chapter 2, Kirk MacGregor strongly argues that evolution proves Molinism. Molinism was one of the dozen or more religious motivations and mandates for evolutionary thought, and now in the twenty-first century, evolution is used as a proof text for Molinism. See the sequence? Religion drives the science, and the resulting theory is then used to confirm the religion. This can only work where (i) there is a loss of historical continuity, where evolution is seen as an objective, tabula rasa, empirical finding, and (ii) there is a breakdown in the science. Below I summarize MacGregor’s argument, and explain why it fails scientifically, and is incoherent.

MacGregor’s argument

MacGregor explains that he accepts evolution for reasons that anyone familiar with the origins debate will recognize. He cites molecular and morphological evidences and arguments for how evolution could have occurred via a long sequence of mutations, and that evolution has far greater explanatory power than special creation. Just because evolution can occur, however, does not mean it is likely to have occurred. MacGregor accepts the evidences and arguments that evolution is astronomically unlikely to have occurred. It may seem that MacGregor has a dilemma: evolution occurred yet could not occur. But for MacGregor all of this demonstrates the truth of Molinism:

Far from constituting a threat to theism, the macroevolutionary account of life’s origins and development actually demonstrates the existence of God and the supremacy of God’s knowledge.  Due to the astronomically low probabilities of countless trans-group and simultaneous unrelated mutations, which I believe the scientific evidence demonstrates to have occurred, the God who created the universe must be endowed with middle knowledge.  Such knowledge of what would happen in every possible biological scenario, especially those which are causally unconstrained, is the only means whereby God could choose to create a world where this dizzying and interdependent array of biological improbabilities would naturally materialize to generate life in all its complexity.  In other words, the evolutionary schema, with its extraordinarily lengthy chain of remote probabilities, could only unfold in time-space if the God who exists knew what would contingently happen in every possible set of circumstances and then proceeded to create an initial quantum phenomenon which naturally issued in precisely those innumerable contingencies necessary for the evolution of intelligent life.  

In other words, what may appear to be a chance naturalistic process was actually foreseen by God, and arranged by God by selecting an initial, quantum, state of the universe. MacGregor does not explore, and perhaps has not substantially considered, a key difference between his approach and Molinism; namely (and very simply put), with Molinism God foresees but does not determine the future decisions of His morally free creatures. This will lead to an incoherence in MacGregor’s approach (more on this below).

Scientific failure

MacGregor provides reasons and evidences for accepting evolution which are typical of the evolutionary literature. He is unaware that these reasons and evidences have been dealt with extensively and have long since failed badly on the science. To begin with, it is an exercise in confirmation testing. For each evidence, complicating factors are simply omitted. And the vast array of contradictory evidences is not considered at all. One example of each will suffice.

One of MacGregor’s evidences is the class of “suboptimal improvisations” and so-called vestigial structures. For example, MacGregor writes that God’s special creation of whales with useless leg bones appears inexplicable, for under special creation whales would not have descended from land-living ancestors with legs. No one could validly argue that such structures resulted from sin, as, on anyone’s reckoning, they emerged before the Fall.

This classic evolutionary argument from disutility has failed over and over, and it is no different this time. In this case, it has long since been considered that these “useless” whale bones are likely used in reproduction and this was eventually confirmed. As one evolutionist admitted:

Our research really changes the way we think about the evolution of whale pelvic bones in particular, but more generally about structures we call “vestigial.”

Another example of how MacGregor’s evidences do not serve his purposes is the so-called nested hierarchy pattern of biological forms which MacGregor claims is accounted for through the successive branching pattern of evolutionary transformation. Again, this icon of evolution has failed badly. It is simply false that the evolutionary tree pattern is explained by a successive branching pattern. As I have documented many times here, the problem is not that there are a few outliers. We’re not talking about a third-decimal point error. Empirical contradictions to the expected nested hierarchy are everywhere, and at all taxonomic levels. They are pervasive and consistent, and it is fair to say that the so-called nested hierarchy pattern is imposed onto the data rather than read out of the data. Homoplasy is rampant in biology, and there simply is no justification for the “nested hierarchy” pattern, such as it is, as an evidence for evolution. Indeed, by modus tollens what the evidence is telling us is that evolution is false, by any reasonable interpretation of the evidence. If an evidence would be powerful evidence for a theory, then the failure of that evidence must be evidence against the theory.

Incoherence

Finally, MacGregor’s position is incoherent for several reasons. First, as noted above MacGregor claims the so-called “suboptimal improvisations” found in biology are strong evidence for evolution. Evolution explains these, whereas with special creation such evidence is inexplicable, and is ruled out. This centuries-old argument is powerful, but it is theory-laden. This can be seen in MacGregor’s terminology: “improvisations.” On the evolutionary view, such suboptimal designs are “improvisations,” but on the special creation view they are not. Likewise, the term “vestigial structures” is also theory-laden. There is nothing inherently “vestigial” about those whale bones, our appendix, or the many other examples evolutionists cite. There is no measure of “vestigial-ness.” The very language MacGregor uses is steeped in evolutionary assumptions.

In fact, the evolutionist’s claim that such evidence is inexplicable and therefore rules out special creation hinges on the theological doctrine that divine intention is to optimize function and fitness. God must create according to the evolutionary concept that designs are driven by fitness (which, in turn, is defined as a reproductive advantage). We might say MacGregor’s theology is based on evolutionary theory but, of course, this goes back long before evolution. It would probably be more accurate to say MacGregor is a utilitarian, which was an important influence and mandate for evolution.

So the first problem is that MacGregor’s position is theory-laden, and circular. The evidence that MacGregor finds to be a powerful confirmation of evolution entails evolutionary assumptions. The next problem follows on the heels of the first; namely, that MacGregor’s position is unbiblical (the Bible does not present a utilitarian Creator, and at times flatly reveals the opposite). Normally this would not be a problem—anyone can hold and advocate any belief he wants to. But MacGregor’s entire argument is intended to be biblical. So he has a significant internal contradiction, in addition to being theory-laden, and circular.

Another problem is that MacGregor believes the divine creation acts are confined to the beginning. This sort of idea is sometimes labelled as “front-loading.” Rather than divine intervention occurring over time, the Creator sets up the initial conditions just right for the desired result (including the evolution of humanity) to unfold. MacGregor here follows the Greater-God theology which, again, traces back several centuries and was important in mandating an evolutionary origins narrative. The classic case study is Leibniz’s horrified rejection of Newton’s proposal that God tweaks the solar system every so often to maintain its stability. Leibniz slammed the notion as blasphemous. The greater god causes the causes of effects, rather than merely causes the effects themselves, as Darwin’s grandfather Erasmus put it. MacGregor follows this belief, and the resulting front-loading will be instrumental in his confirmation of Molinism.

But this confirmation of Molinism is arrived at not by an objective analysis of the empirical evidence, as MacGregor suggests. MacGregor did not have to incorporate front-loading. Following Robert Russell, MacGregor could have envisioned divine action as occurring, at the quantum level, over time. But that would have violated his greater god theology and, crucially, it would have obviated the confirmation of Molinism. So the thesis of the paper—that Molinism is confirmed by science (evolution in particular)—is false. In addition to the utilitarianism we saw above, the conclusion for Molinism also hinges on the greater god theology. On top of all this, as with utilitarianism, greater god theology also is not biblical. Again, this is a problem for MacGregor because his argument is intended to be biblical.

Finally, MacGregor’s utilitarianism and front-loading are contradictory. MacGregor simultaneously holds that (i) God would not create those suboptimal improvisations (hence evolution must be true), and (ii) God knows all possible futures, and how they are brought about, and He selected an initial state which created the world. This means that God created those suboptimal improvisations, which He would never create.

Unfortunately, religion has infected science and the result is bad religion and bad science.

Saturday, February 23, 2019

The “All Outcomes Are Equiprobable” Argument

I Had to Write “Evolution Is True” 500 Times

I’ve been busy lately with a big landscaping job for the neighborhood evolutionist. He wanted a massive set of stones to be carefully arranged in his backyard. He wanted stones of different colors, and the careful arrangement would spell out “Evolution Is True.”

Unfortunately, the day I finished this big job there was an earthquake in the neighborhood which jumbled the stones I had carefully arranged. I had to go back to the evolutionist’s property and put the stones back in order.

To make matters worse, the evolutionist wouldn’t pay me for the job. When I sued him he told the judge that I was lying. He said I didn’t do the job, but instead the arrangement of the stones was due to the recent earthquake.

I explained to the judge that such an event would be unlikely, but the evolutionist retorted that landscapers don’t understand probability. The evolutionist explained to the judge that all outcomes are equally probable. Every outcome, whether it spells out “Evolution Is True” or nothing at all, has a probability of one divided by the total number of possible arrangements. He said that I was committing a mistake that is common with nonscientific and uneducated people. He explained that if you toss a coin 500 times the sequence of heads and tails will be astronomically unlikely. But it happened. All such sequences, even if they spell out a message in Morse code, are equiprobable.
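As an aside, the number the evolutionist is gesturing at is easy to check: any one particular sequence of 500 fair coin tosses has probability (1/2)^500, or roughly 1 in 10^150. Here is a minimal sketch of that arithmetic, nothing more:

```python
import math

# Checking the coin-toss arithmetic from the story: any one specific
# sequence of 500 fair tosses has probability (1/2)**500.
tosses = 500
log10_probability = tosses * math.log10(0.5)
print(round(log10_probability, 1))  # about -150.5, i.e. roughly 1 in 10**150
```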

The judge agreed. He fined me for bringing a frivolous lawsuit against the evolutionist and made me write “Evolution Is True” 500 times.

Saturday, February 16, 2019

Finally, the Details of How Proteins Evolve

A Step-By-Step Description

How did proteins evolve? It is a difficult question because, setting aside many other problems, the very starting point—the protein-coding gene—is highly complex. A large number of random mutations would seem to be required before you have a functional protein that helps the organism. Too often such problems are solved with vague accounts of “adaptations” and “selection pressure” doing the job. But this week researchers at the University of Illinois announced ground-breaking research that provides a step-by-step, detailed, description of the evolution of a new protein-coding gene and associated regulatory DNA sequences. The protein in question is a so-called “antifreeze” protein that keeps the blood of Arctic codfish from freezing, and the new research provides the specific sequence of mutations leading to the new gene. It would be difficult to overestimate the importance of this research. It finally provides scientific details answering the age-old question of how nature’s massive complexity could have arisen. As the paper triumphantly declares, “Here, we report clear evidence and a detailed molecular mechanism for the de novo formation of the northern gadid (codfish) antifreeze glycoprotein (AFGP) gene from a minimal noncoding sequence.” Or as lead researcher, professor Christina Cheng, explained, “This paper explains how the antifreeze protein in the northern codfish evolved.” This is a monumental finding. Having the scientific details, down to the level of specific mutations, of how a new protein-coding gene evolved—not from a related gene but from non-coding DNA—is something evolutionists could only dream of a few short years ago. There’s only one problem: it is all junk science.

The first problem is that this new “research” is, in actuality, a just-so story:

In science and philosophy, a just-so story is an unverifiable narrative explanation for a cultural practice, a biological trait, or behavior of humans or other animals. The pejorative nature of the expression is an implicit criticism that reminds the hearer of the essentially fictional and unprovable nature of such an explanation. Such tales are common in folklore and mythology.

For example, the antifreeze protein is of relatively low complexity, chiefly consisting of a repeating sequence of three amino acids (threonine-alanine-alanine), and the evolutionists claim that these repeating sequences “strongly suggest” that the protein-coding gene “evolved from repeated duplications of an ancestral 9-nucleotide threonine-alanine-alanine-coding element.”
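To make the repeat concrete, here is a minimal sketch of what a repeated 9-nucleotide, threonine-alanine-alanine-coding element would look like when translated. The particular codons (ACA for threonine, GCA for alanine) and the repeat count are illustrative placeholders, not the actual gadid sequence, and the sketch only shows what such a repeat looks like; it says nothing about where it came from:

```python
# Illustrative only: ACA codes for threonine and GCA for alanine in the
# standard genetic code; the real codfish element may use other synonymous codons.
CODONS = {"ACA": "Thr", "GCA": "Ala"}

element = "ACAGCAGCA"      # one hypothetical 9-nt Thr-Ala-Ala-coding unit
gene_like = element * 6    # "repeated duplications" of that unit

peptide = [CODONS[gene_like[i:i + 3]] for i in range(0, len(gene_like), 3)]
print("-".join(peptide))   # Thr-Ala-Ala-Thr-Ala-Ala-...
```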

Why is that true?

Why does a repeating genetic sequence “strongly suggest” that it “evolved from repeated duplications?” What experiment revealed this truth? What evidence gives us this profound principle? The answer, of course, is that there is none. Nowhere do the evolutionists justify this claim because there is no empirical justification.

There is no scientific evidence for it. Zero.

The paper continues with yet more non-empirical claims. Those nine nucleotides “likely originated within a pair of conserved 27-nucleotide” segments that flank each side of the repetitive region. And these four 27-nucleotide segments are similar to each other, “indicating they resulted from the duplication of an initial copy.” As the paper concludes, “chance duplications” of an ancestral 27-nucleotide segment “produced four tandem copies.”

But why are those claims true? Why do such similarities imply an origin via evolutionary mechanisms? The problem is, they don’t. There is no empirical evidence for any of this. This is completely evidence-free.

The evolutionists next explain that the 9-nucleotide segment duplicated a large number of times because it worked well:

We hypothesize that, upon the onset of selective pressure from cold polar marine conditions, duplications of a 9-nt ancestral element in the midst of the four GCA-rich duplicates occurred.

The above quote is an example of the non-empirical teleology that pervades evolutionary thought. It was upon the onset of cold conditions that the needed genetic duplications occurred. This is not empirical; this is story-telling.

The paper continues with a series of one-time, contingent events crucial to their story and non-empirical claims. The genetic sequence “was appropriately delimited by an existing in-frame termination codon.”

Appropriately delimited?

The presence of a region in two of the species “indicates that it existed in the gadid ancestor before the emergence of the AFGP.” The absence of a thymine nucleotide at a location in some of the species “very likely resulted from a deletion event,” causing a fortuitous frameshift which supplied the crucial signal peptide segment, telling cellular machinery that the protein should be secreted to the bloodstream. As the paper concludes, “the emerging AFGP gene was thus endowed with the necessary secretory signal.”

Endowed with the necessary signal?

There is no empirical evidence for any of this.

Another problem with this just-so account is the substantial level of serendipity required. The new antifreeze protein did not arise from some random DNA sequence, but rather from crucial, preexisting segments of DNA that just happened to be lying around. In other words, the fish were facing a colder environment, they needed some antifreeze in their blood, and the pieces needed for such an antifreeze gene were fortuitously available.

The authors hint at this serendipity when they conclude that their story of how this protein evolved is an example of “evolutionary ingenuity.”

Evolutionary ingenuity?

The press release is even more revealing. Cheng admits that the evolution of this gene “occurred as a result of a series of seemingly improbable, serendipitous events.” For “not just any random DNA sequence can produce a viable protein.” Furthermore, in addition to the gene itself, “several other serendipitous events occurred.”

The DNA was “edited in just the right way,” and “somehow, the gene also obtained the proper control sequence that would allow the new gene to be transcribed into RNA.”

Even the evolutionists admit to the rampant serendipity. Nonetheless they are triumphant, for “the findings offer fresh insights into how a cell can invent ‘a new, functional gene from scratch.’”

Fresh insights?

In actuality the findings arose from a series of non-empirical claims.

Religion drives science, and it matters.

Thursday, July 26, 2018

What is a Dependency Graph?

Information Organization

A recent paper, authored by Winston Ewert, uses a dependency graph approach to model the relationships between the species. This idea is inspired by computer science which makes great use of dependency graphs.

Complicated software applications typically use a wealth of lower level software routines. These routines have been developed, tested, and stored in modules for use by higher level applications. When this happens the application inherits the lower-level software and has a dependency on that module.

Such applications are written in human-readable languages such as Java. They then need to be translated into machine language. The compiler tool performs the translation, and the build tool assembles the result, along with the lower level routines, into an executable program. These tools use dependency graphs to model the software, essentially building a design diagram, or blueprint which shows the dependencies, specifying the different software modules that will be needed, and how they are connected together.

Dependency graphs also help with software design. Because they provide a blueprint of the software architecture, they are helpful in designing decoupled architectures and promoting software reuse.

Dependency graphs are also used by so-called “DevOps” teams to assist at deployment time in sequencing and installing the correct modules.
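For readers unfamiliar with the idea, here is a minimal sketch of how a build or deployment tool might represent a dependency graph. The module names are hypothetical and real tools keep far richer graphs, but the principle is the same: each module lists the modules it depends on, and a topological ordering gives a valid build or install sequence.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each (hypothetical) module maps to the set of modules it depends on.
dependencies = {
    "app":      {"auth", "reports"},
    "reports":  {"database"},
    "auth":     {"database"},
    "database": set(),
}

# A topological order lists every module after its dependencies,
# giving a valid build or install sequence.
print(list(TopologicalSorter(dependencies).static_order()))
# e.g. ['database', 'auth', 'reports', 'app']
```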

What Ewert has shown is that biology’s genomes reveal the same kind of pattern: just as computer applications inherit software from a diverse range of lower-level modules, and those lower-level modules feed into a diverse range of applications, so genomes may inherit molecular sequence information from a wide range of genetic modules, and genetic modules may feed into a diverse range of genomes.

Superficially, from a distance, this may appear as the traditional evolutionary tree. But that model has failed repeatedly as scientists have studied the characters of species more closely. Dependency graphs, on the other hand, provide a far superior model of the relationships between the species, and their genetic information flow.

Thursday, July 19, 2018

New Paper Demonstrates Superiority of Design Model

Ten Thousand Bits?

Did you know Mars is going backwards? For the past few weeks, and for several weeks to come, Mars is in its retrograde motion phase. If you chart its position each night against the background stars, you will see it pause, reverse direction, pause again, and then get going again in its normal direction. And did you further know that retrograde motion helped to cause a revolution? Two millennia ago, Aristotelian physics dictated that the Earth was at the center of the universe. Aristarchus’ heliocentric model, which put the Sun at the center, fell out of favor. But what Aristotle’s geocentrism failed to explain was retrograde motion. If the planets are revolving about the Earth, then why do they sometimes pause, and reverse direction? That problem fell to Ptolemy, and the lessons learned are still important today.

Ptolemy explained anomalies such as retrograde motion with additional mechanisms, such as epicycles, while maintaining the circular motion that, as everyone knew, must be the basis of all motion in the cosmos. With fewer than a hundred epicycles, he was able to model and accurately predict the motions of the cosmos. But that accuracy came at a cost—a highly complicated model.

In the Middle Ages William of Occam pointed out that scientific theories ought to strive for simplicity, or parsimony. This may have been one of the factors that drove Copernicus to resurrect Aristarchus’ heliocentric model. Copernicus preserved the required circular motion, but by switching to a sun-centered model, he was able to reduce greatly the number of additional mechanisms, such as epicycles.

Both Ptolemy’s and Copernicus’ models accurately forecast celestial motion. But Copernicus was more parsimonious. A better model had been found.

Kepler proposed ellipses, and showed that the heliocentric model could become even simpler. It was not well accepted though because, as everyone knew, celestial bodies travel in circles. How foolish to think they would travel along elliptical paths. That next step toward greater parsimony would have to wait for the likes of Newton, who showed that Kepler’s ellipses were dictated by his new, highly parsimonious, physics. Newton described a simple, universal, gravitational law. Newton’s gravitational force would produce an acceleration, which could maintain orbital motion in the cosmos.

But was there really a gravitational force? The force was proportional to the mass of the object, which then cancelled out when computing the acceleration. Why not have gravity cause an acceleration straightaway?

Centuries later Einstein reported on a man in Berlin who fell out of a window. The man didn’t feel anything until he hit the ground! Einstein removed the gravitational force and made the physics even simpler yet.

The point here is that the accuracy of a scientific theory, by itself, means very little. It must be considered along with parsimony. This lesson is important today in this age of Big Data. Analysts know that a model can always be made more accurate by adding more terms. But are those additional terms meaningful, or are they merely epicycles? It looks good to drive the modeling error down to zero by adding terms, but when used to make future forecasts, such models perform worse.

There is a very real penalty for adding terms and violating Occam’s Razor, and today advanced algorithms are available for weighing the tradeoff between model accuracy and model parsimony.
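To make that tradeoff concrete, here is a minimal sketch using one common penalty, the Bayesian Information Criterion (BIC). The data and the model orders are made up for illustration; the point is only that the extra terms (the “epicycles”) reduce the fitting error yet typically lose on the penalized score:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)  # the truth is a simple line

def bic(y, y_hat, k):
    """Bayesian Information Criterion under a Gaussian error model."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

for degree in (1, 10):  # a simple model versus one loaded with "epicycles"
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    rss = float(np.sum((y - y_hat) ** 2))
    print(degree, round(rss, 3), round(float(bic(y, y_hat, degree + 1)), 1))
# The degree-10 fit typically has the smaller error but the larger (worse)
# BIC: the added terms are penalized.
```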

This brings us to common descent, a popular theory for modeling relationships between the species. As we have discussed many times here, common descent fails to model the species, and a great many additional mechanisms—biological epicycles—are required to fit the data.

And just as cosmology has seen a stream of ever improving models, the biological models can also improve. This week a very important model has been proposed in a new paper, authored by Winston Ewert, in the Bio-Complexity journal.

Inspired by computer software, Ewert’s approach models the species as sharing modules which are related by a dependency graph. This useful model in computer science also works well in modeling the species. To evaluate this hypothesis, Ewert uses three types of data, and evaluates how probable they are (accounting for parsimony as well as fit accuracy) using three models.

Ewert’s three types of data are: (i) Sample computer software, (ii) simulated species data generated from evolutionary / common descent computer algorithms, and (iii) actual, real species data.

Ewert’s three models are: (i) a null model which entails no relationships between any species, (ii) an evolutionary / common descent model, and (iii) a dependency graph model.

Ewert’s results are a Copernican Revolution moment. First, for the sample computer software data, not surprisingly the null model performed poorly. Computer software is highly organized, and there are relationships between different computer programs, and how they draw from foundational software libraries. But comparing the common descent and dependency graph models, the latter performs far better at modeling the software “species.” In other words, the design and development of computer software is far better described and modeled by a dependency graph than by a common descent tree.

Second, for the simulated species data generated with a common descent algorithm, it is not surprising that the common descent model was far superior to the dependency graph. That would be true by definition, and serves to validate Ewert’s approach. Common descent is the best model for the data generated by a common descent process.

Third, for the actual, real species data, the dependency graph model is astronomically superior compared to the common descent model.

Let me repeat that in case the point did not sink in. Where it counted, common descent failed compared to the dependency graph model. The other data types served as useful checks, but for the data that mattered—the actual, real, biological species data—the results were unambiguous.

Ewert amassed a total of nine massive genetic databases. In every single one, without exception, the dependency graph model surpassed common descent.

Darwin could never have even dreamt of a test on such a massive scale.

Darwin also could never have dreamt of the sheer magnitude of the failure of his theory. Because you see, Ewert’s results do not reveal two competing models with one model edging out the other.

We are not talking about a few decimal points difference. For one of the data sets (HomoloGene), the dependency graph model was superior to common descent by a factor of 10,064. The comparison of the two models yielded a preference for the dependency graph model of greater than ten thousand.

Ten thousand is a big number.

But it gets worse, much worse.

Ewert used Bayesian model selection which compares the probability of the data set given the hypothetical models. In other words, given the model (dependency graph or common descent), what is the probability of this particular data set? Bayesian model selection compares the two models by dividing these two conditional probabilities. The so-called Bayes factor is the quotient yielded by this division.

The problem is that the common descent model is so incredibly inferior to the dependency graph model that the Bayes factor cannot be typed out. In other words, the probability of the data set given the dependency graph model, is so much greater than the probability of the data set given the common descent model, that we cannot type the quotient of their division.

Instead, Ewert reports the logarithm of the number. Remember logarithms? In base 10, a logarithm of 2 means 100, a logarithm of 3 means 1,000, and so forth.

Unbelievably, the 10,064 value is the logarithm (base value of 2) of the quotient! In other words, the probability of the data on the dependency graph model is so much greater than that given the common descent model, we need logarithms even to type it out. If you tried to type out the plain number, you would have to type a 1 followed by more than 3,000 zeros!

That’s the ratio of how probable the data are on these two models!

By using a base value of 2 in the logarithm we express the Bayes factor in bits. So the conditional probability for the dependency graph model has a 10,064-bit advantage over that of common descent.
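For anyone who wants to check the size of that number, converting bits back to a plain ratio is simple arithmetic (the 10,064 figure is the value reported above; the rest is just a change of base):

```python
import math

bits = 10064                           # log2 Bayes factor reported for HomoloGene
decimal_digits = bits * math.log10(2)  # convert a power of 2 into a power of 10
print(round(decimal_digits))           # ~3029: a 1 followed by about 3,000 zeros
```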

10,064 bits is far, far from the range in which one might actually consider the lesser model. See, for example, the Bayes factor Wikipedia page, which explains that a Bayes factor of 3.3 bits provides “substantial” evidence for a model, 5.0 bits provides “strong” evidence, and 6.6 bits provides “decisive” evidence.

This is ridiculous. 6.6 bits is considered to provide “decisive” evidence, and when the dependency graph model is compared to the common descent model, we get 10,064 bits.

But it gets worse.

The problem with all of this is that the Bayes factor of 10,064 bits for the HomoloGene data set is the very best case for common descent. For the other eight data sets, the Bayes factors range from 40,967 to 515,450.

In other words, while 6.6 bits would be considered to provide “decisive” evidence for the dependency graph model, the actual, real, biological data provide Bayes factors of 10,064 on up to 515,450.

We have known for a long time that common descent has failed hard. In Ewert’s new paper, we now have detailed, quantitative results demonstrating this. And Ewert provides a new model, with a far superior fit to the data.

Saturday, June 30, 2018

John Farrell Versus Isaac Newton

Guess Who Wins?

The title of John Farrell’s article in Commonweal from earlier this year is a dead giveaway. When writing about the interaction between faith and science, as Farrell does in the piece, the title “The Conflict Continues” is like a flashing red light that the mythological Warfare Thesis is coming at you.

Sure enough, Farrell does not disappoint. He informs his readers that the fear that science could “make God seem unnecessary” is “widespread today among religious believers,” particularly in the US where “opposition to belief in evolution remains very high.”

Indeed, this fear has “haunted the debate over the tension between religion and science for centuries.” Farrell notes that Edward Larson and Michael Ruse point out in their new book On Faith and Science that the “conflict model doesn’t work so well.” But that seems to be a minor speed bump for Farrell. He finds that:

The idea that the world operates according to its own laws and regularities remains controversial in the evolution debate today, as Intelligent Design proponents attack the consensus of science on Darwinian evolution and insist that God’s direct intervention in the history of life can be scientifically demonstrated.

Farrell also writes that Isaac Newton, driven by concerns about secondary causes, “insisted God was still necessary to occasionally tweak the motions of the planets if any threatened to wander off course.”

Farrell’s piece is riddled with myths. Secondary causes are not nearly as controversial as he would have us believe. He utterly mischaracterizes ID, and Newton said no such thing. It is true that Newton suggested that the Creator could intervene in the cosmos (not “insisted”).

And was this the result of some radical voluntarism?

Of course not. Newton suggested God may intervene in the cosmos because the physics of the day (which, by the way, he invented) indicated that our solar system could occasionally have instabilities. The fact that it was running along just fine, and hadn’t yet blown up, suggested that something had intervened along the way.

Newton was arguing from science, not religion. But that doesn’t fit the Epicurean mythos that religion opposes naturalism while science confirms it. The reality is, of course, the exact opposite.

Sunday, May 20, 2018

New Paper Admits Failure of Evolution

Pop Quiz: Who Said It?

There are many fundamental problems with evolutionary theory. Origin of life studies have dramatically failed. Incredibly complex biological designs, both morphological and molecular, arose abruptly with far too little time to have evolved. The concept of punctuated equilibrium is descriptive, not explanatory. For example, the Cambrian Explosion is not explained by evolution and, in general, evolutionary mechanisms are inadequate to explain the emergence of new traits, body plans and new physiologies. Even a single gene is beyond the reach of evolutionary mechanisms. In fact, the complexity and sophistication of life cannot originate from non-biological matter under any scenario, over any expanse of space and time, however vast. On the other hand, the arch enemy of evolutionary theory, Lamarckian inheritance, in its variety of forms, is well established by the science.

Another Darwin’s God post?

No, these scientific observations are laid out in a new peer-reviewed, scientific paper.

Origin of Life

Regarding origin of life studies, which try to explain how living cells could somehow have arisen on an ancient, inorganic Earth, the paper explains that this idea should have long since been rejected, but instead it has fueled “sophisticated conjectures with little or no evidential support.”

the dominant biological paradigm - abiogenesis in a primordial soup. The latter idea was developed at a time when the earliest living cells were considered to be exceedingly simple structures that could subsequently evolve in a Darwinian way. These ideas should of course have been critically examined and rejected after the discovery of the exceedingly complex molecular structures involved in proteins and in DNA. But this did not happen. Modern ideas of abiogenesis in hydrothermal vents or elsewhere on the primitive Earth have developed into sophisticated conjectures with little or no evidential support.

In fact, abiogenesis has “no empirical support.”

independent abiogenesis on the cosmologically diminutive scale of oceans, lakes or hydrothermal vents remains a hypothesis with no empirical support

One problem, of many, is that the early Earth would not have allowed such monumental evolution to occur:

The conditions that would most likely to have prevailed near the impact-riddled Earth's surface 4.1–4.23 billion years ago were too hot even for simple organic molecules to survive let alone evolve into living complexity

In fact, the whole idea strains credibility “beyond the limit.”

The requirement now, on the basis of orthodox abiogenic thinking, is that an essentially instantaneous transformation of non-living organic matter to bacterial life occurs, an assumption we consider strains credibility of Earth-bound abiogenesis beyond the limit.

All laboratory experiments have ended in “dismal failure.” The information hurdle is of “superastronomical proportions” and simply could not have been overcome without a miracle.

The transformation of an ensemble of appropriately chosen biological monomers (e.g. amino acids, nucleotides) into a primitive living cell capable of further evolution appears to require overcoming an information hurdle of superastronomical proportions, an event that could not have happened within the time frame of the Earth except, we believe, as a miracle. All laboratory experiments attempting to simulate such an event have so far led to dismal failure.

Diversity of Life

But the origin of life is just the beginning of evolution’s problems. For science now suggests evolution is incapable of creating the diversity of life and all of its designs:

Before the extensive sequencing of DNA became available it would have been reasonable to speculate that random copying errors in a gene sequence could, over time, lead to the emergence of new traits, body plans and new physiologies that could explain the whole of evolution. However the data we have reviewed here challenge this point of view. It suggests that the Cambrian Explosion of multicellular life that occurred 0.54 billion years ago led to a sudden emergence of essentially all the genes that subsequently came to be rearranged into an exceedingly wide range of multi-celled life forms - Tardigrades, the Squid, Octopus, fruit flies, humans – to name but a few.

As one of the authors writes, “the complexity and sophistication of life cannot originate (from non-biological) matter under any scenario, over any expanse of space and time, however vast.” As an example, consider the octopus.

Octopus

First, the octopus is an example of novel, complex features appearing rapidly, along with a vast array of genes without apparent ancestry:

Its large brain and sophisticated nervous system, camera-like eyes, flexible bodies, instantaneous camouflage via the ability to switch colour and shape are just a few of the striking features that appear suddenly on the evolutionary scene. The transformative genes leading from the consensus ancestral Nautilus (e.g., Nautilus pompilius) to the common Cuttlefish (Sepia officinalis) to Squid (Loligo vulgaris) to the common Octopus (Octopus vulgaris) are not easily to be found in any pre-existing life form.

But it gets worse. As Darwin’s God has explained, the cephalopods demonstrate a unique level of adenosine-to-inosine mRNA editing. It is yet another striking example of lineage-specific design that utterly contradicts macroevolution:

These data demonstrate extensive evolutionary conserved adenosine to inosine (A-to-I) mRNA editing sites in almost every single protein-coding gene in the behaviorally complex coleoid Cephalopods (Octopus in particular), but not in nautilus. This enormous qualitative difference in Cephalopod protein recoding A-to-I mRNA editing compared to nautilus and other invertebrate and vertebrate animals is striking. Thus in transcriptome-wide screens only 1–3% of Drosophila and human protein coding mRNAs harbour an A-to-I recoding site; and there only about 25 human mRNA messages which contain a conserved A-to-I recoding site across mammals. In Drosophila lineages there are about 65 conserved A-sites in protein coding genes and only a few identified in C. elegans which support the hypothesis that A-to-I RNA editing recoding is mostly either neutral, detrimental, or rarely adaptive. Yet in Squid and particularly Octopus it is the norm, with almost every protein coding gene having an evolutionary conserved A-to-I mRNA editing site isoform, resulting in a nonsynonymous amino acid change. This is a virtual qualitative jump in molecular genetic strategy in a supposed smooth and incremental evolutionary lineage - a type of sudden “great leap forward”. Unless all the new genes expressed in the squid/octopus lineages arose from simple mutations of existing genes in either the squid or in other organisms sharing the same habitat, there is surely no way by which this large qualitative transition in A-to-I mRNA editing can be explained by conventional neo-Darwinian processes, even if horizontal gene transfer is allowed. 

Lamarck

In the twentieth century Lamarckian Inheritance was anathema to evolutionists. Careers were ruined, and every evolutionist knew the inheritance of acquired characteristics sat right alongside the flat earth and geocentrism in the history of ideas. The damning of Lamarck, however, was driven by dogma rather than data, and today the evidence has finally overcome evolutionary theory.

Indeed there is much contemporary discussion, observations and critical analysis consistent with this position led by Corrado Spadafora, Yongsheng Liu, Denis Noble, John Mattick and others, that developments such as Lamarckian Inheritance processes (both direct DNA modifications and indirect, viz. epigenetic, transmissions) in evolutionary biology and adjacent fields now necessitate a complete revision of the standard neo-Darwinian theory of evolution or “New Synthesis " that emerged from the 1930s and 1940s.

Indeed, we now know of a “plethora of adaptive Lamarckian-like inheritance mechanisms.”

There is, of course, nothing new in this paper. We have discussed these, and many, many other refutations of evolutionary theory. Yet the paper is significant because it appears in a peer-reviewed journal. Science is, if anything, conservative. It doesn’t exactly “follow the data,” at least until it becomes OK to do so. There are careers and reputations at stake.

And of course, there is religion.

Religion drives science, and it matters.

Saturday, May 12, 2018

Centrobin Found to be Important in Sperm Development

Numerous, Successive, Slight Modifications

Proteins are a problem for theories of spontaneous origins for many reasons. They consist of dozens, or often hundreds, or even thousands of amino acids in a linear sequence, and while many different sequences will do the job, that number is tiny compared to the total number of sequences that are possible. It is a proverbial needle-in-the-haystack problem, far beyond the reach of blind searches. To make matters worse, many proteins are overlapping, with portions of their genes occupying the same region of DNA. The same set of mutations would have to result in not one, but two proteins, making the search problem that much more tricky. Furthermore, many proteins perform multiple functions. Random mutations somehow would have to find those very special proteins that can perform double duty in the cell. And finally, many proteins perform crucial roles within a complex environment. Without these proteins the cell sustains a significant fitness degradation. One protein that fits this description is centrobin, and now a new study shows it to be even more important than previously understood.
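To give a feel for the needle-in-the-haystack point, here is a rough back-of-the-envelope sketch. The length is approximate (centrobin runs to almost a thousand amino acids) and the calculation ignores every biological subtlety; it only shows how quickly the space of possible sequences grows:

```python
import math

# Rough illustration only: assume a centrobin-sized protein of ~900 residues,
# with 20 standard amino acids possible at each position.
length = 900
log10_sequences = length * math.log10(20)
print(round(log10_sequences))  # ~1171, i.e. about 10**1171 possible sequences
```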

Centrobin is a massive protein of almost a thousand amino acids. Its importance in the division of animal cells has been known for more than ten years. An important player in animal cell division is the centrosome organelle which organizes the many microtubules—long tubes which are part of the cell’s cytoskeleton. Centrobin is one of the many proteins that helps the centrosome do its job. Centrobin depletion causes “strong disorganization of the microtubule network,” and impaired cell division.

Now, a new study shows just how important centrobin is in the development of the sperm tail. Without centrobin, the tail, or flagellum, development is “severely compromised.” And once the sperm is formed, centrobin is important for its structural integrity. As the paper concludes:

Our results underpin the multifunctional nature of [centrobin] that plays different roles in different cell types in Drosophila, and they identify [centrobin] as an essential component for C-tubule assembly and flagellum development in Drosophila spermatogenesis.

Clearly centrobin is an important protein. Without it such fundamental functions as cell division and organism reproduction are severely impaired.

And yet how did centrobin evolve?

Not only is centrobin a massive protein, but there are no obvious candidate intermediate structures. It is not as though we have that “long series of gradations in complexity” that Darwin called for:

Although the belief that an organ so perfect as the eye could have been formed by natural selection, is enough to stagger any one; yet in the case of any organ, if we know of a long series of gradations in complexity, each good for its possessor, then, under changing conditions of life, there is no logical impossibility in the acquirement of any conceivable degree of perfection through natural selection.

Unfortunately, in the case of centrobin, we do not know of such a series. In fact, centrobin would seem to be a perfectly good example of precisely how Darwin said his theory could be falsified:

If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case.  

Darwin could “find out no such case,” but he didn’t know about centrobin. Darwin required “a long series of gradations,” formed by “numerous, successive, slight modifications.”

With centrobin we are nowhere close to fulfilling these requirements. In other words, today’s science falsifies evolution. This, according to Darwin’s own words.

Religion drives science, and it matters.

Monday, April 30, 2018

Meet Jamie Jensen: What Are They Teaching at Brigham Young University?

Bacterial Resistance to Antibiotics

Rachel Gross’ recent article about evolutionists’ public outreach contains several misconceptions that are, unfortunately, all too common. Perhaps most obvious is the mythological Warfare Thesis that Gross and her evolutionary protagonists heavily rely on. Plumbing the depths of ignorance, Gross writes:

Those who research the topic call this paradigm the “conflict mode” because it pits religion and science against each other, with little room for discussion. And researchers are starting to realize that it does little to illuminate the science of evolution for those who need it most.

“Those who research the topic call this paradigm the ‘conflict mode’”?

Huh?

This is reminiscent of Judge Jones’ endorsement of Inherit the Wind as a primer for understanding the origins debate, for it is beyond embarrassing. Exactly who are those “who research the topic” to which Gross refers?

Gross is apparently blithely unaware that there are precisely zero such researchers. The “conflict mode” is a long-discarded, failed view of history promoted in Inherit the Wind, a two-dimensional, upside-down rewrite of the 1925 Monkey Trial.

But ever since, evolutionists have latched onto the play, and the mythological history it promotes, in an unabashed display of anti-intellectualism. As Lawrence Principe has explained:

The notion that there exists, and has always existed, a “warfare” or “conflict” between science and religion is so deeply ingrained in public thinking that it usually goes unquestioned. The idea was however largely the creation of two late nineteenth-century authors who confected it for personal and political purposes. Even though no serious historians of science acquiesce in it today, the myth remains powerful, and endlessly repeated, in wider circles

Or as Jeffrey Russell writes:

The reason for promoting both the specific lie about the sphericity of Earth and the general lie that religion and science are in natural and eternal conflict in Western society, is to defend Darwinism. The answer is really only slightly more complicated than that bald statement.

Rachel Gross is, unfortunately, promoting the “general lie” that historians have long since been warning of. Her article is utter nonsense. The worst of junk news.

But it gets worse.

Gross next approvingly quotes Brigham Young University associate professor Jamie Jensen whose goal is to inculcate her students with Epicureanism. “Acceptance is my goal,” says Jensen, referring to her teaching of spontaneous origins in her Biology 101 class at the Mormon institution.

As we have explained many times, this is how evolutionists think. Explaining their anti-scientific, religious beliefs is not enough. You must believe. As Jensen explains:

By the end of Biology 101, they can answer all the questions really well, but they don’t believe a word I say. If they don’t accept it as being real, then they’re not willing to make important decisions based on evolution — like whether or not to vaccinate their child or give them antibiotics.

Whether or not to give their child antibiotics?

As we have discussed many times before, the equating of “evolution” with bacterial resistance to antibiotics is an equivocation and bait-and-switch.

The notion that one must believe in evolution to understand bacterial resistance to antibiotics is beyond absurd.

It not only makes no sense; it masks the monumental empirical contradictions that bacterial antibiotic resistance presents to evolution. As a university life science professor, Jensen is of course well aware of these basic facts of biology.

And she gets paid to teach people’s children?

Religion drives science, and it matters.

Saturday, April 28, 2018

Rewrite the Textbooks (Again), Origin of Mitochondria Blown Up

There You Go Again

Why are evolutionists always wrong? And why are they always so sure of themselves? With the inexorable march of science, the predictions of evolution, which evolutionists were certain of, just keep on turning out false. This week’s failure is the much celebrated notion that the eukaryote’s power plant—the mitochondria—shares a common ancestor with the alphaproteobacteria. A long time ago, as the story goes, that bacterial common ancestor merged with an early eukaryote cell. And these two entities, as luck would have it, just happened to need each other. Evolution had just happened to create that early bacterium, and that early eukaryote, in such a way that they needed, and greatly benefited from, each other. And, as luck would have it again, these two entities worked together. The bacterium would just happen to produce the chemical energy needed by the eukaryote, and the eukaryote would just happen to provide needed supplies. It paved the way for multicellular life with all of its fantastic designs. There was only one problem: the story turned out to be false.

The story that mitochondria evolved from the alphaproteobacteria lineage has been told with great conviction. Consider the Michael Gray 2012 paper which boldly begins with the unambiguous truth claim that “Viewed through the lens of the genome it contains, the mitochondrion is of unquestioned bacterial ancestry, originating from within the bacterial phylum α-Proteobacteria (Alphaproteobacteria).”

There was no question about it. Gray was following classic evolutionary thinking: similarities mandate common origin. That is the common descent model. Evolutionists say that once one looks at biology through the lens of common descent everything falls into place.

Except that it doesn’t.

Over and over evolutionists have to rewrite their theory. Similarities once thought to have arisen from a common ancestor turn out to contradict the common descent model. Evolutionists are left having to say the similarities must have arisen independently.

And big differences, once thought to show up only in distant species, keep on showing up in allied species.

Biology, it turns out, is full of one-offs, special cases, and anomalies. The evolutionary tree model doesn’t work.

Now, a new paper out this week has shown that the mitochondria and alphaproteobacteria don’t line up the way originally thought. That “unquestioned bacterial ancestry” turns out to be, err, wrong.

The paper finds that mitochondria did not evolve from the currently hypothesized alphaproteobacterial ancestor, or from “any other currently recognized alphaproteobacterial lineage.”

The paper does, however, make a rather startling claim. The authors write:

our analyses indicate that mitochondria evolved from a proteobacterial lineage that branched off before the divergence of all sampled alphaproteobacteria.

Mitochondria evolved from a proteobacterial lineage, predating the alphaproteobacteria?

That is a startling claim because, well, simply put there is no evidence for it. The lack of evidence is exceeded only by the evolutionist’s confidence. Note the wording: “indicate.”

The evolutionist’s analyses indicate this new truth.

How can the evolutionists be so sure of themselves in the absence of literally any evidence?

The answer is, because they are evolutionists. They are completely certain that evolution is true. And since evolution must be true, the mitochondria had to have evolved from somewhere. And the same is true for the alphaproteobacteria. They must have evolved from somewhere.

And in both cases, that somewhere must be the earlier proteobacterial lineage. There are no other good evolutionary candidates.

Fortunately this new claim cannot be tested (and therefore cannot be falsified), because the “proteobacterial lineage” is nothing more than an evolutionary construct. Evolutionists can search for possible extant species for hints of a common ancestor with the mitochondria, but failure to find anything can always be ascribed to extinction of the common ancestor.

This is where evolutionary theory often ends up: failures ultimately lead to unfalsifiable truth claims. Because heaven forbid we should question the theory itself.

Religion drives science, and it matters.

Tuesday, April 24, 2018

New Ideas on the Evolution of Photosynthesis Reaction Centers

Pure Junk

Evolutionists do not have a clear understanding of how photosynthesis arose, as evidenced by a new paper from Kevin Redding’s laboratory at Arizona State University which states that:

After the Type I/II split, an ancestor to photosystem I fixed its quinone sites and then heterodimerized to bind PsaC as a new subunit, as responses to rising O2 after the appearance of the oxygen-evolving complex in an ancestor of photosystem II. These pivotal events thus gave rise to the diversity that we observe today.

That may sound like hard science to the uninitiated, but it isn’t.

The Type I/II split is a hypothetical event for which the main evidence is the belief that evolution is true. In fact, according to the science, it is astronomically unlikely that photosynthesis evolved, period.

And so, in typical fashion, the paper presents a teleological (“and then structure X evolved to achieve Y”) narrative to cover over the absurdity:

and then heterodimerized to bind PsaC as a new subunit, as responses to rising O2 …

First, let’s reword that so it is a little clearer: atmospheric oxygen levels rose, and therefore the reaction center of an early photosynthesis system heterodimerized in order to bind a new protein (which helps with electron transfer).

This is a good example of the Aristotelianism that pervades evolutionary thought. This is not science, at least in the modern sense. And as usual, the infinitive form (“to bind”) provides the telltale sign. In other words, a new structure evolved as a response to X (i.e., as a response to the rising oxygen levels) in order to achieve Y (i.e., to achieve the binding of a new protein, PsaC).

But it gets worse.

Note the term: “heterodimerized.” A protein machine that consists of two identical proteins mated together is referred to as a “homodimer.” If two different proteins are mated together, it is a “heterodimer.” In some photosynthesis systems, the core of the reaction center is a homodimer; more typically, it is a heterodimer.

The Redding paper states that the ancient photosynthesis system “heterodimerized.” In other words, it switched, or converted, the protein machine from a homodimer to a heterodimer (in order to bind PsaC). The suffix “-ize,” in this case, means to cause to be or to become. The ancient photosynthesis system caused the protein machine to become a heterodimer.

Such teleology reflects evolutionary thought, and let’s be clear: this is junk science. From a scientific perspective there is nothing redeeming here. It is pure junk.

But it gets worse.

These pivotal events thus gave rise to the diversity that we observe today.

Or as the press release described it:

Their [reaction centers’] first appearance and subsequent diversification has allowed photosynthesis to power the biosphere for over 3 billion years, in the process supporting the evolution of more complex life forms.

So evolution created photosynthesis, which then “gave rise to” the evolution of incredibly more advanced life forms. In other words, evolution climbed an astronomical entropic barrier and created incredibly unlikely structures which were crucial for the amazing evolutionary history to follow.

The serendipity is deafening.

Religion drives science, and it matters.

Wednesday, April 18, 2018

The Dinosaur “Explosion”

As Though They Were Planted There

In the famed Cambrian Explosion most of today’s animal phyla appeared abruptly in the geological strata. How could a process driven by blind, random mutations produce such a plethora of new forms? Evolutionist Steve Jones has speculated that the Cambrian Explosion was caused by some crucial change in DNA. “Might a great burst of genetic creativity have driven a Cambrian Genesis and given birth to the modern world?” [1] What explanations such as this do not address is the problem of how evolution overcame such astronomical entropic barriers. Rolling a die, no matter how creatively, is not going to design a spaceship.

The Cambrian Explosion is not the only example of the abrupt appearance of new forms in the fossil record, and the other examples are no easier for evolution to explain. Nor has the old saw, that it’s the fossil record’s fault, fared well. There was once a time when evolutionists could appeal to gaps in the fossil record to explain why the species appear to arise abruptly, but no more. There has just been too much paleontology work, such as a new international study on dinosaurs published this week, confirming exactly what the strata have been showing all along: new forms really did arise abruptly.

The new study narrows the dating of the rise of dinosaurs in the fossil record. It confirms that many dinosaur species appeared in an “explosion” or what “we term the ‘dinosaur diversification event (DDE)’.” It was an “explosive increase in dinosaurian abundance in terrestrial ecosystems.” As the press release explains,

First there were no dinosaur tracks, and then there were many. This marks the moment of their explosion, and the rock successions in the Dolomites are well dated. Comparison with rock successions in Argentina and Brazil, where the first extensive skeletons of dinosaurs occur, show the explosion happened at the same time there as well.

As lead author Dr Massimo Bernardi at the University of Bristol explains, “it’s amazing how clear cut the change from ‘no dinosaurs’ to ‘all dinosaurs’ was.”

There just isn’t enough time, and it is another example of a failed prediction of the theory of evolution.

1. Steve Jones, Darwin’s Ghost, p. 206, Random House, New York, 2000.

h/t: The genius.

Sunday, April 15, 2018

Andreas Wagner: Genetic Regulation Drives Evolutionary Change

A Hall of Mirrors

A new paper from Andreas Wagner and co-workers argues that a crucial driver of evolution is change in the interaction between transcription factor proteins and the short DNA sequences to which they bind. In other words, evolution is driven by varying the regulation of protein expression (and a particular type of regulation, the transcription factor-DNA binding) rather than by varying the structural proteins themselves. Nowhere does the paper address or even mention the scientific problems with this speculative idea. For example, if evolution primarily proceeds by random changes to transcription factor-DNA binding, creating all manner of biological designs and species, then from where did those transcription factors and DNA sequences come? The answer, that they evolved for some different, independent function (itself an evolutionary impossibility), necessitates astronomical levels of serendipity. Evolution could not have had foreknowledge. It could not have known that the emerging transcription factors and DNA sequences would, just luckily, be only a mutation away from some new function. This serendipity problem has been escalating for years as evolutionary theory has repeatedly failed and evolutionists have applied ever more complex hypotheses to try to explain the empirical evidence. Evolutionists have had to impute to evolution increasingly sophisticated, complex, higher-order mechanisms, and with each one the theory has become ever more serendipitous. So it is not too surprising that evolutionists steer clear of the serendipity problem. Instead, they cite previous literature as a way of legitimizing evolutionary theory. Here I will show examples of how this works in the new Wagner paper.

The paper starts right off with the bold claim that “Changes in the regulation of gene expression need not be deleterious. They can also be adaptive and drive evolutionary change.” That is quite a statement. To support it the paper cites a classic 1975 paper by Mary-Claire King and A. C. Wilson entitled “Evolution at two levels in humans and chimpanzees.” The 1975 paper admits that the popular idea and expectation that evolution occurs by mutations in protein-coding genes had largely failed. The problem was that, at the genetic level, the two species were too similar:

The intriguing result, documented in this article, is that all the biochemical methods agree in showing that the genetic distance between humans and the chimpanzee is probably too small to account for their substantial organismal differences.

Their solution was to resort to a monumental shift in evolutionary theory: evolution would occur via the tweaking of gene regulation.

We suggest that evolutionary changes in anatomy and way of life are more often based on changes in the mechanisms controlling the expression of genes than on sequence changes in proteins. We therefore propose that regulatory mutations account for the major biological differences between humans and chimpanzees.

In other words, evolution would have to occur not by changing proteins, but by changing protein regulation. What was left unsaid was that highly complex, genetic regulation mechanisms would now have to be in place, a priori, in order for evolution to proceed.

Where did those come from?

Evolution would have to create highly complex, genetic regulation mechanisms so that evolution could occur.

Not only would this ushering of serendipity into evolutionary theory go unnoticed, it would, incredibly, thereafter be cited as a sort of evidence, in its own right, showing that evolution occurs by changes to protein regulation.

But of course the 1975 King-Wilson paper showed no such thing. The paper presupposed the truth of evolution, and from there reasoned that evolution must have primarily occurred via changes to protein regulation. Not because anyone could see how that could occur, but because the old thinking—changes to proteins themselves—wasn’t working.

This was not, and is not, evidence that changes in the regulation of gene expression can be “adaptive and drive evolutionary change,” as the Wagner paper claimed.

But this is how the genre works. The evolution literature makes unfounded claims that contradict the science, and justifies those claims with references to other evolution papers which do the same thing. It is a web of deceit.

Ultimately it all traces back to the belief that evolution is true.

The Wagner paper next cites a 2007 paper that begins its very first sentence with this unfounded claim:

It has long been understood that morphological evolution occurs through alterations of embryonic development.

I didn’t know that. And again, references are provided. This time to a Stephen Jay Gould book and a textbook, neither of which demonstrates that “morphological evolution occurs through alterations of embryonic development.”

These sorts of lofty claims by evolutionists are ubiquitous in the literature, but they never turn out to be true. Citations are given, and those in turn provide yet more citations. And so on, in a seemingly infinite hall of mirrors, where monumental assertions are casually made and immediately followed by citations that simply do the same thing.

Religion drives science, and it matters.

Saturday, April 14, 2018

IC: We Can Say It, But You Can’t

Pre-Adaptation

In contrast [to trait loss], the gain of genetically complex traits appears harder, in that it requires the deployment of multiple gene products in a coordinated spatial and temporal manner. Obviously, this is unlikely to happen in a single step, because it requires potentially numerous changes at multiple loci.

If you guessed this was written by an Intelligent Design advocate, such as Michael Behe describing irreducibly complex structures, you were wrong. It was written by evolutionist Sean Carroll and co-workers in a 2007 PNAS paper.

When a design person says it, it is heresy. When an evolutionist says it, it is the stuff of good solid scientific research.

The difference is the design person assumes a realist view (the genetically complex trait evinces design) whereas the evolutionist assumes an anti-realist view (in spite of all indications, the genetically complex trait must have arisen by blind causes).

To support their position, evolutionists often appeal to a pre-adaptation argument. This argument claims that the various subcomponents (gene products, etc.) needed for the genetically complex trait were each needed for some other function. Therefore, they evolved individually and independently, only later to serendipitously fit together perfectly and, in so doing, form a new structure with a new function that just happened to be needed. As Richard Dawkins once put it:

The bombardier beetle’s ancestors simply pressed into different service chemicals that already happened to be lying around. That’s often how evolution works.

The problem, of course, is that this is not realistic. To think that each and every one of the seemingly endless thousands upon thousands of genetically complex traits just happened to luckily arise from parts that just happened to be lying around is to make one’s theory dependent on too much serendipity.

Religion drives science, and it matters.