Comments on Darwin's God: Here is That Secret Gnosis Evolutionists Have

Elizabeth Liddle (2012-05-07 15:53):

OK. So we agree that all the numerator is, is the probability of the data, given the hypothesis?

That's fine. In that case there is not even an apparent oxymoron. The numerator tells you absolutely nothing about the strength of the evidence for CA, and nor, in fact, does the likelihood ratio per se. It only does so IF p(CA)=1-p(SA) is a valid proposition.

Which, in fact, it is not. Not because it is a contrast, but because there is a substantial excluded middle.

Cornelius Hunter (2012-05-07 14:43):

EL: "I can track it back, and what you keep referring to as 'the probability for the CA case' is, in fact, the probability of the data given the CA case. These two things are simply not the same! Your 'clarification' is just as far off the mark as your original was."

Well, we've traded many comments here so it may get confusing, but that was not my clarification. That was what I said many comments back; you pointed out it was clumsy, I agreed and made this clarification:

"I thought that was clear, but my summary was not. To be clear, 'In other words, the worse the conditional probability of the evidence given CA, the better the case for CA, because the conditional probability of the evidence given SA got even worse yet.'"

EL: "So how do you get 'probability on CA' from p(data|CA)? What do those words even mean, in plain English?"

Again, you seem to be consistently going several comments back and sticking to a comment I made which I have clarified several times.

Elizabeth Liddle (2012-05-07 14:32):
CH: "No, the example comes from evolutionists."

I meant it was the one you brought up.
*sigh*

CH: "You seem to be working hard to avoid what I said, even after I clarified it."

No I am not, Cornelius, and if I were a less tolerant woman, I'd be offended at the suggestion. I am working extremely hard to understand what you have said, but fortunately, because you give a reference to the actual math term, I can track it back, and what you keep referring to as "the probability for the CA case" is, in fact, the probability of the data given the CA case. These two things are simply not the same!

Your "clarification" is just as far off the mark as your original was.

CH: "The use of contrastive reasoning, which in this case is metaphysical."

It's not that the case can be made to look powerful by contrastive reasoning, Cornelius. All hypotheses are tested by contrastive reasoning. And by this stage I simply have no clue what you mean by "metaphysical" in this context. There is nothing "metaphysical" about hypothesis testing, whether Bayesian or Fisherian. You just need to be very careful how you construct your null. Dembski isn't. The rest of us usually are.

CH: "Finding those designs that are disadvantageous and so have low probability on CA. The lower the probability on CA, the better, because the SA model that is used gives an even lower probability of the data. That is, as you move to designs with lower probabilities on CA, the probabilities on SA decrease even faster."

Cornelius, I went to great lengths to show you that the more deleterious the mutation, the lower the probability of CA|data. Did you not follow my math? Did you find an error? And you keep using impossible-to-parse terms like "the probability on CA". I don't know what that means, and when I track it back, it usually turns out to be p(data|CA).

CH: "It's all metaphysical. The non-metaphysical result is that the evidence has low probability on CA."

I have no idea what you mean by either of these sentences.

CH: "Likelihoodism is usually contrasted with Bayesianism."

Yes, but what you are calling the likelihood and I am calling the odds ratio is only one simple arithmetical step away from the posterior probability, if we set the priors for both hypotheses at .5. The issue is what it actually means.

A likelihood ratio is not a probability measure, and the numerator and denominator are not probabilities for either hypothesis.

CH: "Well, you've said this repeatedly without actually addressing what I said, and in particular my clarifications. You seem to be repeating this over and over, even though I have clarified more than once now."

That's because your clarifications are not clear!

I take it that you agree that the numerator of your likelihood ratio is p(data|CA) and your denominator is p(data|SA)?

So how do you get "probability on CA" from p(data|CA)? What do those words even mean, in plain English?

CH: "It is not the conclusion that I don't like. It is the misrepresentation. The selective use of data and non-scientific premises is not within science, it is metaphysics. Nothing wrong with metaphysics, just don't call it science."

Of course "selective use of data" is not science. I have never come across any usage of "metaphysics" that would make it that, either. It's just bad science.

It is not what is being done here. Or, at least, you have not made the case that it is.

Cornelius Hunter (2012-05-07 11:13):
Continued …

EL: "Yes, but it was your example, not mine!"

No, the example comes from evolutionists.

EL: "You seemed to think that what was wrong with it was that the 'probability of the CA case' was still tiny after doing the Bayesian computation. It isn't."

You seem to be working hard to avoid what I said, even after I clarified it.

EL: "In other words you are reaching a correct conclusion (that contrastive reasoning is the 'reason CA looks good') but that is not because the case for CA is still very tiny."

Again, you seem to be working hard to avoid what I said. The bottom line is this:

1. We're looking at making the case for CA. We're looking at the likelihood ratio and at the posterior probability but, either way, we find the case for CA can be made to look extremely powerful. This is achieved by,

2. The use of contrastive reasoning, which in this case is metaphysical.

3. Finding those designs that are disadvantageous and so have low probability on CA. The lower the probability on CA, the better, because the SA model that is used gives an even lower probability of the data. That is, as you move to designs with lower probabilities on CA, the probabilities on SA decrease even faster.

It's all metaphysical. The non-metaphysical result is that the evidence has low probability on CA.

EL: "Both examples are Bayesian."

Likelihoodism is usually contrasted with Bayesianism.

EL: "as long as you don't then interpret the numerator or denominator incorrectly, which I think you have done."

Well, you've said this repeatedly without actually addressing what I said, and in particular my clarifications. You seem to be repeating this over and over, even though I have clarified more than once now.

EL: "The only issue is whether we assume p(CA)=1-p(SA) or not. … All that matters is that we set up the pair of contrasting hypotheses in such a way that one is everything except the other."

Only issue? That's metaphysical.

EL: "For example, to take Sober's example, the reason it reaches a conclusion you don't like"

It is not the conclusion that I don't like. It is the misrepresentation. The selective use of data and non-scientific premises is not within science, it is metaphysics. Nothing wrong with metaphysics, just don't call it science.

Cornelius Hunter (2012-05-07 11:12):
EL: "What is the difference between these two? The second just seems like a clumsier way of expressing the first."

Not sure how to respond, since I've already explained it, such as in the following:

"Right, this is the abbreviation I pointed out earlier. I was referring to the evolutionary argument from the likelihood ratio, where the numerator and denominator refer to the probability of the evidence given CA and SA, respectively.

I thought that was clear, but my summary was not. To be clear, 'In other words, the worse the conditional probability of the evidence given CA, the better the case for CA, because the conditional probability of the evidence given SA got even worse yet.'"

I thought that was clear.

EL: "What you are calling 'the probability for the CA case' is in fact (check out the math) 'the probability of the data, given CA'. This is not an estimate of the probability that CA is true"

Right, agreed. I thought I was clear about that above. It appears you are not reading what I wrote, but just repeating the same criticism for effect.

EL: "But you aren't interested in the probability of the evidence, any more than you are interested in the probability of the hand of cards you are dealt after you've dealt it. Once it's dealt, it's there - you can ignore all the alternative universes in which you were dealt a different hand."

Well, that's a telling statement. No one is talking about alternative universes. The problem here is that the evidence has low probability on CA. You can't just dismiss the low probability of the data on CA and say it doesn't matter. Of course it matters. And it's the one thing that doesn't hinge on metaphysics.

You can't dismiss the low probability simply because "we're not interested in that." Nor can you dismiss it because you think your calculated posterior probability takes precedence and is more important. That posterior is metaphysical, whereas the probability of the data at least derives from a hypothesis.

EL: "what we are interested in is what that means for our alternative hypotheses. And what it means is that the probability of one being true hugely increases, and the other decreases. And as p(CA)=1-p(SA), as the first approaches 1 the second necessarily approaches 0. You do not end up with two small probabilities in which one is nonetheless a lot bigger than the other. They sum to 1."

This is all metaphysics. Science has no way of knowing this.

Continued …

Elizabeth Liddle (2012-05-07 04:27):

PS: actually, I realise Sober is even closer to doing what I did than I thought. He's basically assumed that the priors for each hypothesis are equal, so they cancel out. So all you need to do to convert his odds ratio into my posterior probability is to add the numerator to the denominator.

Elizabeth Liddle (2012-05-07 03:20):

And this is precisely where Dembski goes wrong.
He uses "contrastive reasoning", which is just fine as long as you interpret your conclusions appropriately.

And just as concluding that "CA" is highly probable is only as good as your case that "CA" is the only possible explanation for non-independence (which we know it is not - HGT is another possibility, for a start), so Dembski's conclusion that "ID" is highly probable is only as good as his case that "ID" is the only possible explanation for patterns that he regards as "specified".

In other words, you have to be really careful about your remainders - what have you ruled out?

In Fisherian approaches this means carefully specifying your null (this is where Dembski makes his error). In Bayesian approaches, it means being very careful to specify exactly what each of your alternative hypotheses predicts.

And if one of your alternative hypotheses is ID, you are a bit stuck.

Elizabeth Liddle (2012-05-07 02:58):

CH: "Even though the conditional probability of the evidence given CA gets worse, the likelihood approach makes CA look good by virtue of the contrastive reasoning."

No. You are still not understanding the Bayesian probability calculation. The absolute magnitude of the probability of the evidence given CA is irrelevant once you have the evidence. It doesn't "get worse". There is no "worse" about it. All that matters is the relative probability of the data, given each hypothesis, and neither of these probabilities is the probability of the "case for" the respective hypothesis.

Yes, "CA" looks good "by virtue of contrastive reasoning", but not because, although p(CA|data) is small, p(SA|data) is "worse".
Neither of those is a probability "for" either case.

In other words, you are reaching a correct conclusion (that contrastive reasoning is the "reason CA looks good"), but that is not because the case for CA is still very tiny.

CH: "The same is true in your Bayesian example, except now you can actually have two numbers (the probabilities of CA and SA given the evidence) instead of one (the ratio of the conditional probabilities of the evidence given CA and SA)."

Both examples are Bayesian. I have done exactly what Sober has done, only I have gone further and actually computed the posterior probability for CA, rather than simply computing the odds ratio for p(data|CA) and p(data|SA). I did this to show you that neither of these two quantities is the probability "for the CA/SA" case. Clearly, the odds ratio will tell you which way the answer is going to go, so it is still worth looking at (and it saves you putting a prior on either hypothesis, which is in fact important, but that's a different issue), as long as you don't then interpret the numerator or denominator incorrectly, which I think you have done.

CH: "But you're not getting around the problem. Nothing has changed. Whether in the likelihood approach or in your Bayesian example, the worse the probability of the evidence the better CA looks (either in the form of a ratio or in the form of a probability). In both cases this is underwritten by the contrastive reasoning, which you have kindly supplied in the form of a formula: p(CA)=1-p(SA)."

What I have done is to show you that you have reached a reasonable conclusion by an erroneous piece of reasoning! The only issue is whether we assume p(CA)=1-p(SA) or not.

It doesn't matter whether we use a Bayesian approach, as Sober does, or a Fisherian approach, as Dembski does. All that matters is that we set up the pair of contrasting hypotheses in such a way that one is everything except the other.

For example, to take Sober's example, the reason it reaches a conclusion you don't like is that it has assumed that, under Separate Ancestry, the only possible explanation for M appearing in both populations is that it appeared spontaneously in both (with probability = p).

This is not necessarily a safe assumption. So we could do exactly the same math, but this time we will relabel our hypotheses. Now "SA" becomes "IM" for "Independent Mutations" and "CA" becomes "NM" for "Non-independent Mutations". p(NM)=1-p(IM).

You do exactly the same math, and you end up with an increased posterior probability of NM. However, now, instead of concluding that "CA" is overwhelmingly supported, we just as validly conclude that "NM" is overwhelmingly supported.

The only difference is our interpretation - we now conclude that something or other in the ancestry of both was common, but that common factor wasn't necessarily a common ancestor but, possibly, a common designer.

Final installment below:

Elizabeth Liddle (2012-05-07 02:46):

CH: "I agree with your Bayesian example, but in any case, in both approaches, the probability of the evidence given CA becomes worse."

Well, "worse" is an unnecessary pejorative. Yes, the probability of the data, given either hypothesis, is clearly lower as p decreases. But you aren't interested in the probability of the evidence, any more than you are interested in the probability of the hand of cards you are dealt after you've dealt it.
Once it's dealt, it's there - you can ignore all the alternative universes in which you were dealt a different hand.

And given the data (a tautology, as "data" actually means "what are given"), what we are interested in is what that means for our alternative hypotheses. And what it means is that the probability of one being true hugely increases, and the other decreases. And as p(CA)=1-p(SA), as the first approaches 1 the second necessarily approaches 0. You do not end up with two small probabilities in which one is nonetheless a lot bigger than the other. They sum to 1.

CH: "But that's the point. In fact, that is a curious way of putting it, because that is the reason why your Bayesian example works."

Yes, but it was your example, not mine! You seemed to think that what was wrong with it was that the "probability of the CA case" was still tiny after doing the Bayesian computation. It isn't.

What is wrong isn't that the probability of the data, given CA, is still very small (if p is small) and therefore the probability of the CA case still very small (it isn't), but that we have no excluded middle.

more below

Elizabeth Liddle (2012-05-07 02:07):

CH: "It isn't the probability *of* CA, but it is the probability for the CA case. My abbreviation was too abbreviated."

What is the difference between these two?
The second just seems like a clumsier way of expressing the first.

What the probability you are describing as "the probability for the CA case" actually is (according to the math) is the probability of the data, given CA.

If that is what you are calling "the probability for the CA case" then that is still extremely confusing - not to say misleading (though I do not accuse you of being deliberately misleading).

It is certainly not the probability that CA is true, which is what it sounds like. And neither is it the probability that CA is true, given the data, which is what we actually want to know.

To take your own example: we want to know "the probability that this essay is plagiarized, given the data", not "the probability of these data, given plagiarism".

We know that if "these data" are the words "Participants were a volunteer sample of psychology students who received course credits for their participation" then, because the probability of these words appearing in a report is very high whether or not the report has been plagiarised, the probability of these data, given plagiarism, is also very high, but the case for plagiarism low.

However, if the words are a verbatim transcript of an unusual claim made in an obscure published paper, then the probability that they will appear at all is very low, as is the probability that they will appear given plagiarism.

But this of course makes the case for plagiarism stronger, not weaker, because it increases the probability of plagiarism, given the data.

To summarise:

What you are calling "the probability for the CA case" is in fact (check out the math) "the probability of the data, given CA".

This is not an estimate of the probability that CA is true (which seems to me what the words "the probability for the CA case" would most naturally mean). What is an estimate of the probability that CA is true is the posterior probability of CA, given the data.

Which increases as p reduces.

More below.

Cornelius Hunter (2012-05-06 16:45):
EL: "doesn't make sense. The numerator of the conditional probabilities used in the likelihood ratio isn't 'the probability for CA'; it is the probability of the data, given CA."

It isn't the probability *of* CA, but it is the probability for the CA case. My abbreviation was too abbreviated.

EL: "CH: In other words, the worse the probability for CA, the better the case for CA, because the SA probability got even worse yet.

You have, I think, confused the probability of the data, given CA, with the probability of CA, given the data!"

Right, this is the abbreviation I pointed out earlier. I was referring to the evolutionary argument from the likelihood ratio, where the numerator and denominator refer to the probability of the evidence given CA and SA, respectively.

I thought that was clear, but my summary was not. To be clear, "In other words, the worse the conditional probability of the evidence given CA, the better the case for CA, because the conditional probability of the evidence given SA got even worse yet."

I agree with your Bayesian example, but in any case, in both approaches, the probability of the evidence given CA becomes worse.

EL: "The reason the contrastive hypothesis works is simply because p(CA)=1-p(SA). That means that your inference is only as good as that assumption."

But that's the point. In fact, that is a curious way of putting it, because that is the reason why your Bayesian example works.

Even though the conditional probability of the evidence given CA gets worse, the likelihood approach makes CA look good by virtue of the contrastive reasoning. The same is true in your Bayesian example, except now you actually have two numbers (the probabilities of CA and SA given the evidence) instead of one (the ratio of the conditional probabilities of the evidence given CA and SA).

But you're not getting around the problem. Nothing has changed. Whether in the likelihood approach or in your Bayesian example, the worse the probability of the evidence, the better CA looks (either in the form of a ratio or in the form of a probability). In both cases this is underwritten by the contrastive reasoning, which you have kindly supplied in the form of a formula: p(CA)=1-p(SA).

Elizabeth Liddle (2012-05-06 12:48):

But that is still not the probability of CA, given the data! And so this:

"In other words, the worse the probability for CA, the better the case for CA, because the SA probability got even worse yet."

doesn't make sense. The numerator of the conditional probabilities used in the likelihood ratio isn't "the probability for CA"; it is the probability of the data, given CA.

Now you can compute the ratio between this and the probability of the data, given SA, if you want, but it does not tell you anything about the probability of CA being true.

What you want is the posterior probability of CA being true. Which you can compute using Bayes' theorem, and the answer is that the smaller p is, the larger the posterior probability for CA. In my example, it rises from a prior of .5 to something near 1.

Cornelius Hunter (2012-05-06 10:31):
EL:

Quick response to clarify an item:

"if by 'the conditional probability for CA' you mean the posterior probability of CA, given the data (and 'data' literally means 'what are given')."

No, sorry, I didn't type that out clearly. By "conditional probability for CA" I was referring to the conditional probabilities used in the likelihood ratio. In this case, that would be the numerator: the probability of the observation that both populations have T=1, given CA.

Elizabeth Liddle (2012-05-06 06:54):

Let the probability of the observed sequence (let's call it M) occurring spontaneously in any individual be P.

And let's approximate p(0-->1) as N*P (i.e. 1-(1-P)^N, for small P), where N is the number of generations between the putative ancestral population and the present population. Now let Q be the probability that a parent will pass on the mutation. If Q is near 1, the mutation will be near neutral. If Q is near 0, the mutation will be highly deleterious. So we can alter either of those parameters and see what happens.

We have two hypotheses, CA and SA. These are mutually exclusive, so p(CA)=1-p(SA).

Let's set our prior for CA at .5, so each hypothesis has equal prior probability. What we want to know is the posterior probability of CA, having observed the same sequence M in both populations. I'll call that MB (M in Both).

According to Bayes' theorem:
p(CA|MB) = p(MB|CA)*p(CA) / ( p(MB|CA)*p(CA) + p(MB|SA)*p(SA) )

As you say,
p(MB|CA) = (1-P)*Prob(0-->1)^2 + (P)*Prob(1-->1)^2

and
p(MB|SA) = [ (1-P)*Prob(0-->1) + (P)*Prob(1-->1) ]^2

So now we can plug in some numbers.
If P=.0000001, Q=.9, and N=100:

Prob(0-->1) = 0.0000100
Prob(1-->1) = 0.3660387

This gives us:

p(MB|CA) = 0.0000000135
p(MB|SA) = 0.0000000001

And our posterior probability for CA, given MB, is .993 (by my calcs, using Excel).

If we reduce P to .00000001, p(CA|MB) increases to .999, and so on.

However, if I instead reduce Q (making T=1 more deleterious), p(CA|MB) drops to only just over .5, and the smaller Q becomes, the nearer to .5 p(CA|MB) remains.

However, the main point holds: the less likely M is to occur spontaneously, the greater the posterior probability of CA.
This is why your plagiarism example is correct (and why my paternity example is also correct!)

But your interpretation is quite wrong. The probability of your hypothesis being correct, given the data, is *not* reduced because the probability of your data, given your hypothesis, is low. It is *increased*, as you will see if you plug my math into Excel. There is nothing "post-modern" about this. Even if you put in a very low prior for CA, your posterior will still be greater than your prior.

The reason the contrastive hypothesis works is simply because p(CA)=1-p(SA). That means that your inference is only as good as that assumption.

Which may not be good. Dembski uses exactly the same reasoning to infer ID, but unfortunately violates that assumption. He assumes that p(chance)=1-p(design). But he computes p(chance) in a way that is too narrow, and thus has a large excluded middle.

Now, if you want to argue that common ancestry and separate ancestry are not mutually exclusive, that's fine! But that's not the argument against "contrastive" reasoning you've made by invoking Sober.

There is one, and I've made it against Dembski.
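For anyone who wants to check the arithmetic, the worked example above can be reproduced in a few lines of Python. This is only a sketch: the two transition probabilities are taken at the values quoted in the comment rather than re-derived from P, Q and N.

```python
# Reproduce the posterior calculation from the worked example above.
P = 1e-7           # probability of sequence M arising spontaneously
p01 = 0.0000100    # Prob(0->1), as quoted in the comment
p11 = 0.3660387    # Prob(1->1), as quoted in the comment

# Probability of seeing M in Both populations under each hypothesis
p_mb_ca = (1 - P) * p01**2 + P * p11**2       # common ancestry
p_mb_sa = ((1 - P) * p01 + P * p11) ** 2      # separate ancestry

# Bayes' theorem with equal priors p(CA) = p(SA) = .5
prior = 0.5
posterior_ca = p_mb_ca * prior / (p_mb_ca * prior + p_mb_sa * (1 - prior))

print(round(posterior_ca, 3))  # 0.993, matching the comment
```

Note that although p(MB|CA) and p(MB|SA) are both tiny, the posterior p(CA|MB) is near 1; this is the distinction between the probability of the data given a hypothesis and the probability of the hypothesis given the data that the thread keeps returning to.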
So, the floor is yours :)

Elizabeth Liddle (2012-05-06 06:47):

I was considering your original quotation, in which Sober sets p(1-->1) = 1, which will not be the case if T=1 is disadvantageous.

But the more complex case, in which p(1-->1) is small, as in the case of a disadvantageous mutation, still holds, though less strongly.

Sober's point remains valid whether or not the novel sequence in question is deleterious, namely: if you see the same rare thing twice, the probability that the two occurrences have a common cause becomes greater than the probability that they do not.

As you point out in your plagiarism example.

However, if T=1 is deleterious, observing T will actually contribute less to the case for CA than it would if T=1 were neutral or advantageous, as I will show below. But that doesn't matter, because the argument is that the lower p is, whether or not T=1 is deleterious, the better the case for CA.

CH: "So you can see that according to the likelihood ratio, the argument for CA strengthens as p becomes smaller. And as p becomes smaller, the conditional probability for CA also becomes smaller."

No, it doesn't. It becomes considerably larger. Or at least, it does if by "the conditional probability for CA" you mean the posterior probability of CA, given the data (and "data" literally means "what are given"). In other words, if you mean the conditional probability of the data on CA, then sure.
But then your statement below does not follow.

CH: "In other words, the worse the probability for CA, the better the case for CA, because the SA probability got even worse yet."

You have, I think, confused the probability of the data, given CA, with the probability of CA, given the data!

Let's do a worked example:

(continued below)

Cornelius Hunter (2012-05-05 13:58):

EL: "I think you read too imaginatively!"
No, I di...EL:<br /><br /><i>I think you read too imaginatively!</i><br /><br />No, I did not read imaginatively.<br /><br /><br /><i>Let's take a really really simple example. We have a paternity case …</i><br /><br />Your paternity case example does not serve you well because it is not analogous. In the paternity case you have the merging of two genomes, like a “Y”. Both hypotheses (the man, or the man’s friend) are a “Y”.<br /><br /><br /><i>Exactly the same is true in Sober's example …</i><br /><br />No, in the common ancestry (CA) case Sober examines, you have the splitting of a population into two species, like an upside down “Y”. Then for the other hypothesis, you have separate ancestry (SA), which is simply two separate vertical lines “| |”.<br /><br />I think this will become more obvious if I just explain what Sober is pointing out. On page 300 he shows the FIS-DIS (frequency independent, disadvantageous) likelihood ratio formula. It’s very simple: the probability that species X and Y share trait T given common ancestry and frequency independent selection against T is divided by the probability that species X and Y share trait T given separate ancestry and, again, frequency independent selection against T.<br /><br />When trait T is in state 0, it is advantageous. When it is in state 1, it is disadvantageous.<br /><br />On page 301 Sober develops this in his Equation (G). Again, it is straightforward. For the CA case, you consider two possibilities: the ancestral population has T=0 or it has T=1. You let p be the probability T=1:<br /><br />p = Prob(T=1)<br /><br />So now you consider the situation where both observed, extant, populations have T=1. In the common ancestry (CA) case, your common ancestor might have T=0, or T=1. 
For the T=0 possibility, you have the probability that both observed, extant, populations have T=1 as:<br /><br />(1-p)*Prob(0-->1)^2<br /><br />And for the T=1 possibility, you have the probability that both observed, extant, populations have T=1 as:<br /><br />(p)*Prob(1-->1)^2<br /><br />So the numerator of the likelihood ratio, that is the probability that both observed, extant, populations have T=1 given CA, is simply the sum of these two probabilities:<br /><br />(1-p)*Prob(0-->1)^2 + (p)*Prob(1-->1)^2<br /><br />Now for the separate ancestry (SA) case. Here, to obtain the probability that both observed, extant, populations have T=1 given SA, you just compute the probability that a single extant population has T=1, and then square it.<br /><br />So again, for the possibility that the ancestor has T=0, you have the probability that the observed, extant, population has T=1 as:<br /><br />(1-p)*Prob(0-->1)<br /><br />And for the T=1 possibility, you have:<br /><br />(p)*Prob(1-->1)<br /><br />The denominator of the likelihood ratio, that is the probability that both observed, extant, populations have T=1 given SA, is then the sum of these two probabilities squared:<br /><br />[ (1-p)*Prob(0-->1) + (p)*Prob(1-->1) ]^2<br /><br />Now given that T=1 is disadvantageous, Sober simplifies this a bit in his (FIS-DIS) equation on page 303, by letting Prob(0-->1) = 0. That is, the probability of any transition to a disadvantageous state is small, so just let it be zero.<br /><br />So you can do the math to see that the likelihood ratio (i.e., the ratio of the two conditional probabilities above), with Prob(0-->1) = 0, is simply:<br /><br />1/p.<br /><br />So you can see that according to the likelihood ratio, the argument for CA strengthens as p becomes smaller. 
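As a check on the algebra above, here is a minimal sketch of the likelihood ratio just derived; the function and variable names are mine, not Sober's:

```python
# Likelihood ratio for shared trait T=1, as derived above.
# p   = Prob(T=1), the probability of the disadvantageous state
# t01 = Prob(0-->1), t11 = Prob(1-->1)

def likelihood_ratio(p, t01, t11):
    """P(both extant populations have T=1 | CA) over the same probability given SA."""
    ca = (1 - p) * t01 ** 2 + p * t11 ** 2   # numerator: common ancestry
    single = (1 - p) * t01 + p * t11         # one lineage under separate ancestry
    sa = single ** 2                         # denominator: separate ancestry
    return ca / sa

# With Sober's simplification Prob(0-->1) = 0, the ratio collapses to 1/p,
# so it grows without bound as p shrinks:
for p in (0.1, 0.01, 0.001):
    assert abs(likelihood_ratio(p, 0.0, 1.0) - 1 / p) < 1e-9
```

Note that with `t01 = t11 = 1` the ratio is exactly 1: a trait that every lineage acquires anyway is no evidence either way, which matches the intuition that only rare shared traits discriminate CA from SA.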
And as p becomes smaller, the conditional probability for CA also becomes smaller.<br /><br />In other words, the worse the probability for CA, the better the case for CA, because the SA probability got even worse yet.<br /><br />By the way, if you’re looking for analogies, a better one, which Sober uses in his paper “Did Darwin write the Origin backwards?” is plagiarism, where homework assignments from two students are compared. This, again, makes the point clear. The lower the probability of the shared similarity, the stronger the case for plagiarism.Cornelius Hunterhttps://www.blogger.com/profile/12283098537456505707noreply@blogger.comtag:blogger.com,1999:blog-3855268335402896473.post-8564481901148828972012-05-04T11:23:32.609-07:002012-05-04T11:23:32.609-07:00In that case your issue seems simply to be that ev...In that case your issue seems simply to be that evolutionists are pretty sure we are on the right lines.<br /><br />Well we are :)<br /><br />I look forward to your response re Sober.<br /><br />No rush, have to go and rehearse a Brandenburg!Elizabeth Liddlehttps://www.blogger.com/profile/02465414316063910821noreply@blogger.comtag:blogger.com,1999:blog-3855268335402896473.post-76105363867772869082012-05-04T10:57:07.581-07:002012-05-04T10:57:07.581-07:00EL:
"You have demonstrated that more people ...EL:<br /><br />"You have demonstrated that more people say it, and in less nuanced ways, than I had been aware, but in none of your examples do I detect any sense that it is a metaphysical statement: Evolution definitely occurred this way, and no other."<br /><br />Oh, I don't say that is the reason it is metaphysical. Sorry, you got the wrong impression. If one states "I'm pretty sure a benevolent creator would not have made this universe," that is a metaphysical claim, notwithstanding the uncertainty. One need not be making a Prob=1 claim to be making a metaphysical claim.Cornelius Hunterhttps://www.blogger.com/profile/12283098537456505707noreply@blogger.comtag:blogger.com,1999:blog-3855268335402896473.post-50731934928438724612012-05-04T10:52:52.787-07:002012-05-04T10:52:52.787-07:00EL:
"Cornelius, I note that you have complet...EL:<br /><br />"Cornelius, I note that you have completely failed to address my point re Sober!"<br /><br />Sorry, that was merely a quick response. I will respond to your greater points a bit later.Cornelius Hunterhttps://www.blogger.com/profile/12283098537456505707noreply@blogger.comtag:blogger.com,1999:blog-3855268335402896473.post-38620035443450379792012-05-04T10:48:44.820-07:002012-05-04T10:48:44.820-07:00Cornelius, I note that you have completely failed ...Cornelius, I note that you have completely failed to address my point re Sober!<br /><br />"X is a fact" is not a scientific statement. It is therefore not something about which the "consensus" is even interesting.<br /><br />You have demonstrated that more people say it, and in less nuanced ways, than I had been aware, but in none of your examples do I detect any sense that it is a metaphysical statement: Evolution definitely occurred this way, and no other.<br /><br />Evolutionary theory, as I keep saying, is a large body of theory, subject to constant refinement.<br /><br />"Evolution is a fact" is too general a claim to have any clear meaning at all, and clearly different people mean different things by it.<br /><br />Most people seem to mean, simply "the evolutionary framework in which we work is extraordinarily fruitful and it looks as though we've got the key elements essentially right".<br /><br />And I'd agree with them.<br /><br />tbh I don't even see why you think that is a problem. All scientific conclusions are provisional, but some are clearly less wrong than others.<br /><br />Now, what about Sober? Do you now understand that he's not saying that the less evidence there is for a claim, the stronger it is?Elizabeth Liddlehttps://www.blogger.com/profile/02465414316063910821noreply@blogger.comtag:blogger.com,1999:blog-3855268335402896473.post-81008031771331685332012-05-04T09:13:36.304-07:002012-05-04T09:13:36.304-07:00EL:
"But I will happily concede that at leas...EL:<br /><br />"But I will happily concede that at least some evolutionist writers are rather too liberal with the F word."<br /><br />Actually this is the consensus position. I don't know of any tradition within evolutionary thought that holds that evolution is *not* a fact. If you can provide a citation that shows otherwise I'd appreciate it.Cornelius Hunterhttps://www.blogger.com/profile/12283098537456505707noreply@blogger.comtag:blogger.com,1999:blog-3855268335402896473.post-39591749940438716322012-05-04T04:56:53.374-07:002012-05-04T04:56:53.374-07:00continued...
You can’t start making fact claims...continued...<br /><br /><br /><br /><i>You can’t start making fact claims because hypothesis X beat out hypothesis Y. </i><br /><br />Agreed.<br /><br /><i>But that’s what evolutionists do. </i><br /><br />Not that I can see. Nobody claims that evolution is true because it's better than ID (although that may be the case). They claim that it is true because it consistently generates successful hypotheses. But as I hope is now absolutely clear, I wouldn't even say that. I think it's careless language. But what motivates it is certainly not "well, it beats any other explanation". <br /><br />In contrast, IDists are the ones who claim that their hypothesis is true because it's better (they insist) than the alternative.<br /><br />Dembski lays this out in tedious detail in both his "Explanatory Filter" and in his explicit adoption of the Fisherian testing model (if the data fall into the rejection region we must infer Design), ignoring the fact that he tests no evolutionary model.<br /><br />So yes, I agree that we can only have confidence in a model if it consistently generates hypotheses supported by our observations, not merely if we have rejected some alternative as inferior.<br /><br />You have nicely pinpointed exactly what is wrong with ID!<br /><br />But I will happily concede that at least some evolutionist writers are rather too liberal with the F word.<br /><br /><i>They end up with grandiose claims, even though their theory has astronomically low probability. </i><br /><br />Loose grammar has misled you. A "theory" doesn't have a "probability" (well, only in the sense of how likely it is that someone will come up with it). What you really mean is that the events postulated by the theory of evolution have "astronomically low probability".<br /><br />This is an evidence-free assertion. 
We simply do not know the probabilities of the postulated events, and it is impossible to calculate them, not least because you have conflated OOL with evolution:<br /><br /><i>Life and the species don’t appear spontaneously, at least this is not likely under science. </i><br /><br />To take OOL first: we do not know "under science" how "likely" life is, because we do not know just how simple self-replication has to be to be capable of evolution. If very simple, then it may happen with high probability given certain conditions, in which case we might be able to estimate the likely frequency of those conditions in the universe. But we can't even start to calculate it until we have a good model of how simple it could have been.<br /><br />As for the probability of species occurring, we know that this is highly likely, given an evolving population (and that populations evolve is an observed fact). Indeed speciation - at least incipient speciation - is an observed fact (observed in real time, in both lab and field), and the hypothesis that the process explains bifurcations in inferred phylogenies derived from fossils, extant species, and genetics makes predictions that are also supported by observations.<br /><br /><i>But evolution is a fact because, as they say, nothing else makes sense. 1/p is huge.</i><br /><br />No. 
"Evolution is a fact" because it makes a great deal of sense, and it doesn't have a competitor within miles.<br /><br />As for your 1/p, assuming that "p" is "the probability that evolution is true" - let's see your derivation of p.Elizabeth Liddlehttps://www.blogger.com/profile/02465414316063910821noreply@blogger.comtag:blogger.com,1999:blog-3855268335402896473.post-43826038319564510492012-05-04T04:55:56.583-07:002012-05-04T04:55:56.583-07:00EL: Nonetheless, however appealing a scientific th...EL: <i>Nonetheless, however appealing a scientific theory may be to someone with a religious (or even anti-religious) point of view, the test of a scientific theory is not whether we like it, but whether it fits the data.</i><br /><br />CH: <i>Sober’s analysis is an example of why this isn’t true. That’s the rub with contrastive reasoning. There’s nothing wrong with contrastive reasoning, per se, but in the end all you’ve done is compared two hypotheses. </i><br /><br />Yes, of course. And much of the time that's what we do. But we also compare absolute fits.<br /><br />For example, we can fit a linear slope to the relationship between two variables. We can contrast that with the null of "no slope", i.e. no relationship. And if we reject the null, we can also quantify the fit - for example by summing the squares of the residuals.<br /><br />Confidence in your hypothesis is only loosely related to the confidence with which you reject the null. That is why I say that we do not actually proceed by falsification of hypotheses in science, on the whole. We say we falsify the null, but if we falsify the null, that is all we say. 
We do not say our hypothesis is true, we say that the data are more consistent with our hypothesis than with the null.<br /><br />What gives us confidence in our hypothesis are two things: small residuals, and consistently better fits than with a series of alternative hypotheses.<br /><br />For instance, if I reject a null, the first thing I do is try to think up another hypothesis that would also account for my observations, and make differential predictions that would "tease them apart" as scientists like to say.Elizabeth Liddlehttps://www.blogger.com/profile/02465414316063910821noreply@blogger.comtag:blogger.com,1999:blog-3855268335402896473.post-9895829871421180102012-05-04T04:03:37.624-07:002012-05-04T04:03:37.624-07:00No, I think you read too quickly on this one. The ...<i>No, I think you read too quickly on this one. The point is simply that the likelihood ratio goes to p/p^2, or 1/p. So you get a high likelihood ratio (and therefore a victory for CA) when p is small. IOW, CA looks good when one is looking at deleterious designs.</i><br /><br />I think you read too imaginatively! Actually, if anything, the opposite is the case. <br /><br />CA simply "looks good" when we are looking at a low frequency event (low p) that has a high probability of being observed twice under CA (and non-deleteriousness), but only a p^2 probability under SA. <br /><br />Let's take a really really simple example.<br /><br />We have a paternity case. Husband thinks his wife's second child is not his. His best friend has a mutation that rarely appears spontaneously. So does his second child. Neither he nor his wife has that mutation. <br /><br />Under the hypothesis that his second child is the son of his best friend, the probability that the child will have the rare mutation is .5. 
Under the hypothesis that the child is the husband's, the probability that the child will have the allele is simply equal to the frequency of that mutation occurring spontaneously.<br /><br />Therefore the case for "2nd child is not mine" is extremely strong.<br /><br />Now, take a different mutation, one that has a much higher frequency of spontaneous occurrence. Now, the probability of observing the mutation in the 2nd child is much higher under the child-is-husband's hypothesis, although still a lot lower than under the child-is-best-friend's hypothesis.<br /><br />In this case, it doesn't matter much whether the mutation is deleterious or advantageous, although if the mutation causes impotence, then that would load the probabilities on the no-infidelity side. But as long as the mutation does not affect the probability of the affair, all that matters is that the less frequently it appears spontaneously, the stronger the case for the child-is-best-friend's hypothesis.<br /><br />Exactly the same is true in Sober's example, except that the probability that an ancestral mutation will be present in a descendant is much higher if the mutation is non-deleterious, and better still if it is advantageous and therefore "highly conserved". <br /><br />That means that if the same advantageous mutation appears in both lineages, under CA, the probability that both lineages will have it if a common ancestor had it is high. (Sober sets it to 1 for simplicity). <br /><br />Under CA, therefore, the probability of observing a highly advantageous mutation in both species is 1 times the probability p of the mutation occurring at all, whereas the probability of observing it under SA is p^2.<br /><br />If the probability of the mutation occurring spontaneously is high, these two numbers will be similar, and therefore not strong evidence for CA. 
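The paternity arithmetic a few paragraphs up can be put in the same likelihood-ratio form; the spontaneous-occurrence frequencies below are made-up illustrative values, not figures from the discussion:

```python
# Likelihood ratio for the paternity analogy (illustrative numbers only).

def paternity_lr(spont_freq):
    """P(child has mutation | father is best friend) over
    P(child has mutation | father is husband)."""
    p_friend = 0.5            # friend carries the mutation; passes it on half the time
    p_husband = spont_freq    # husband's child would need a fresh spontaneous mutation
    return p_friend / p_husband

# The rarer the spontaneous mutation, the stronger the case against the friend:
rare, common = paternity_lr(1e-4), paternity_lr(0.1)
assert rare > common
```

This mirrors the CA/SA comparison: a common spontaneous mutation gives a modest ratio and weak evidence, while a rare one gives an enormous ratio, exactly as a small p does for common ancestry.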
If the probability of the mutation occurring spontaneously is low, the difference between them will be great, and will be much stronger evidence for CA, just as my hypothetical father has a much stronger paternity suit against his best friend if the mutation his friend shares with his son is one that rarely occurs spontaneously than if it is one that commonly occurs spontaneously.<br /><br /><br />I'll address your second point below.Elizabeth Liddlehttps://www.blogger.com/profile/02465414316063910821noreply@blogger.comtag:blogger.com,1999:blog-3855268335402896473.post-36764772228824499372012-05-03T08:42:12.870-07:002012-05-03T08:42:12.870-07:00Thorton:
As a legitimate source for background un...Thorton:<br /><br /><i>As a legitimate source for background understanding of historical religion vs. science legal conflicts</i><br /><br />So the judge is not the only one fooled by the propaganda. To say that *Inherit the Wind* is a legitimate source for background understanding of historical religion vs. science legal conflicts is not even wrong. The script is a two-dimensional, blatantly false rendition of the historical events in Dayton, laughable in its obvious agenda. For a federal judge to cite it as a legitimate source for anything, other than a successful propaganda effort, is astonishing.<br /><br />And you accuse me of shameful misrepresentation? Well at least you are consistent. That is one of the normal modes of discourse for evolutionists. Try to point something out (that is completely uncontroversial and public knowledge), and you get harshly criticized for all kinds of misdeeds, while they meanwhile misrepresent the science and the history.Cornelius Hunterhttps://www.blogger.com/profile/12283098537456505707noreply@blogger.com