
Friday, February 28, 2025

Cameron changes his mind about John 6

Cameron Bertuzzi, a YouTuber who converted to Catholicism not long ago, made this post on YouTube linking to this post on substack where he explained how he has changed his mind about whether John 6 teaches transubstantiation. As a protestant, he thought Jesus' statement in John 6:53 about eating his flesh and drinking his blood was a metaphor, but now, as a Catholic, he thinks it's literal. I had a few thoughts on Cameron's post while I was reading it, so I figured I'd go back through and make a blog post about it.

I don't want to talk about everything Cameron said, just a few things that jumped out at me.

Cameron used to think Jesus' statement that "the flesh is of no avail," (vs. 63) undermined a literal interpretation of Jesus' statement that you can have no life in you unless you eat his flesh (vs. 53). I used to think the same thing, but Cameron does make a good point. Since verse 53 refers to "my flesh," but verse 63 refers to "the flesh," they're probably not talking about the same thing. When Jesus said the flesh counts for nothing, that was probably about the fact that you can't have spiritual life by your own effort. You need the quickening power of the Spirit.

Cameron may be right, but it would've been nice if he had explained how his new understanding fits with Jesus' flow of thought in the passage rather than hanging everything on the difference of one word. This is something that jumped out at me throughout Cameron's post. He didn't really explain the passage. He didn't walk through it or try to make sense of Jesus' flow of thought. I'll cut him some slack, though, because his intention probably wasn't to give a full exegesis of John 6. He just wanted to make a few bullet points.

Cameron no longer thinks the Old Testament command to abstain from drinking blood serves as a good argument against the Catholic position because there are multiple occasions where Jesus superseded ceremonial laws (e.g. regarding the Sabbath, sacrifices, etc.).

I'm not sure that works, though. When Jesus declared all foods clean in Mark 7:18-19, yeah, he did kind of supersede dietary laws, which is why it's okay for Christians to eat bacon. The same thing cannot be said of drinking blood, though. When the apostles had the council in Jerusalem to figure out whether gentile converts had to obey the whole law or not, James explicitly included the command to abstain from blood, and he did not qualify it in any way (Acts 15:19-20). Jesus could not have superseded the command to abstain from drinking blood since that remains a Christian obligation. It's actually pretty striking that James thought this command was important enough to include in his short list of requirements.

Cameron used to think the rabbinic use of metaphor somehow meant Jesus was using a metaphor in John 6, but now he thinks context should decide. I agree with him that context should decide. Unfortunately, Cameron didn't discuss the context. If you read the whole chapter, the context makes it clear that eating and drinking Jesus is a metaphor for coming to and believing in Jesus. Notice the parallel:

6:40: every one who sees the Son and believes in him should have eternal life; and I will raise him up at the last day.

6:54: he who eats my flesh and drinks my blood has eternal life, and I will raise him up at the last day

Speaking of parallels, there's a strong parallel between Jesus' teaching in John 4 and his teaching in John 6. In John 4, Jesus is talking to a woman who wants some water. In John 6, he's talking to a crowd who wants some bread. Jesus uses "living water" as a metaphor in John 4:10, and he uses "living bread" as a metaphor in John 6:51, and they both refer to himself as the source of eternal life. Notice the parallels.

John 4 - woman at the well | John 6 - bread of life discourse
Whoever drinks the water Jesus gives them will never thirst (John 4:14). | Whoever comes to Jesus (the bread of life) will never go hungry or thirst (John 6:35).
Give me this water (John 4:15). | Give us this bread (John 6:34).
Water will give eternal life (John 4:14). | Bread of God gives life to the world (John 6:33).

Cameron goes on to say, "The crowd’s shocked reaction and Jesus’ refusal to correct their literal understanding undermines a purely metaphorical reading." I responded to this statement in a comment on his post, so I'll just cut and paste what I said here.

The point Catholics often make, that if Jesus had given his audience the wrong impression he would have corrected them, strikes me as problematic. Imagine what you would think had you been there. Keep in mind that you don't have the advantage of hindsight. The last supper hasn't happened yet. Is there anything in what Jesus said that would lead you to believe there would be a ritual meal in which actual bread and wine would be converted into the flesh and blood of Jesus while retaining all the properties and appearances of bread and wine? No, there isn't. So what would your impression have been had you only listened to Jesus tell you that you must eat his flesh and drink his blood in order to have eternal life, and you took him literally? The only conclusion you could have come to was that Jesus meant for you to butcher and eat him, i.e. to butcher the actual man standing in front of you, and to eat the meat off his bones and drink the blood that poured out of his wounds. That's why it was so shocking to his listeners.

There's no doubt that's the impression his audience had, and it was the wrong impression even by Catholic standards. Yet, Jesus did not clarify for his audience that what he REALLY meant was that there would be a ritual meal in which actual bread and wine would be turned into the flesh and blood of Jesus while still looking and tasting like bread and wine. If Jesus had made this clarification for his audience, it might've still struck them as being weird, but it would be nowhere near as offensive or off-putting as the impression he left them with.

So the fact that Jesus didn't clarify or correct the wrong impression he left his audience with does not in any way mean that the impression he left them with was true. Whether you're Catholic or protestant, Jesus left his audience with the wrong impression, and he made no effort to clarify. So Catholics should stop using this argument. It doesn't help you.

One more point I'd like to make is that throughout John 6, Jesus is explaining why the crowd doesn't actually believe in him. It's because they were not given to Jesus by the Father (vs. 36-37), and they were not drawn by the Father (John 6:44). After saying, "But there are some of you that do not believe," he explained, "This is why I told you that no one can come to me unless it is granted him by the Father" (John 6:64-65). The reason he told them nobody could come to him unless the Father granted it is that some of them didn't believe. He was explaining their unbelief. Since Jesus was explaining their unbelief, it wouldn't make sense for him to clear up their objection to what he was teaching about himself. Their rejection of Jesus is recorded for us to illustrate their unbelief and to confirm Jesus' teaching about the necessity of the giving and drawing of the Father.

Cameron used to think the issue of bi-location was an insurmountable problem for transubstantiation, but now he thinks this is just a human limitation he was inappropriately applying to God. Since God can perform miracles, he can perform the miracle of bi-location.

This response makes me question why Cameron thought bi-location was a problem to begin with. Nobody, as far as I know, raises this objection because they don't think God can do miracles. The problem is deeper than that. God's ability to do miracles does not enable him to engage in absurdity. If I told Cameron that God's ability to do miracles should allow him to make married bachelors, I'm sure Cameron would object. The impossibility of bi-location is not a mere human limitation, and I seriously doubt that's what Cameron thought it was when he was a protestant.

The philosophical problems facing transubstantiation go beyond bi-location, too. There is a problem of identity. Allegedly, the first transubstantiation happened at the last supper when Jesus identified the bread with his flesh, then broke it and gave it to his disciples to eat. How could the bread actually be Jesus' flesh?

Suppose Jesus miraculously turned bread into human flesh, which he can surely do since he turned water into wine. What makes it Jesus' flesh rather than, say, Peter's flesh? If Jesus had wanted to make it somebody else's flesh, what could he have done differently? If all Jesus did was turn the bread into human flesh, there isn't anything that could make it the flesh of somebody in particular.

If I created an exact duplicate of the Mona Lisa, my duplicate would not be the actual Mona Lisa no matter how good of a job I did. It would just be a replica. There's nothing God himself could do to cause my replica to be the same object as the original Mona Lisa sitting in the Louvre. In the same way, there's nothing Jesus could've done to a loaf of bread to cause it to be one person's flesh rather than another person's flesh. The problem isn't that it's a miracle. The problem is that it's a violation of identity. It's very similar to the problem Jehovah's Witnesses face when it comes to resurrection and the problem Captain Kirk faces when using a transporter.

There's another way Jesus might've performed a transubstantiation, though, besides turning the bread and wine into flesh and blood. He could've performed a miracle in which the bread and wine instantaneously poofed out of existence while flesh and blood simultaneously poofed into existence in exactly the same location. The idea is similar to how wood becomes petrified, with minerals replacing the wood molecule by molecule, except that it happens instantaneously. But this scenario creates the same problem of identity. In this scenario, flesh and blood are being created ex nihilo to replace the bread and wine, and there is nothing that can make the flesh and blood be Jesus' flesh and blood rather than somebody else's or nobody's at all.

Jesus did not lose any body parts when he fed the disciples that night. So whatever the disciples ate or drank, however it was created, it wasn't literally Jesus' flesh and blood. Cameron says that philosophical discomfort doesn't dictate theological truth. I wonder if Cameron's philosophical discomfort with married bachelors still dictates what he thinks God can or can't do.

I think transubstantiation is to Catholics what the Book of Abraham is to Latter Day Saints. It is essential to Catholicism because it is essential to the Mass. It is the most obviously false doctrine of all the teachings of the Catholic church, and since it is essential, it utterly undermines Catholicism. I found Cameron's responses to his old objections so weak that it makes me wonder what was going through his head back when he used those objections.

I've written on this subject a few other times, so I'll leave a few links here for further reading.

Transubstantiation This is my opening to a debate I had on this subject.

Catholic vs. protestant interpretation of John 6 This is my opening to a debate on a broader topic that includes transubstantiation.

An Argument Against Transubstantiation This is something I wrote a long long time ago on a message board that used to exist on Stand to Reason's website.

Catholics and Communion This is a post on Stand to Reason's old blog in which I argued with some people about transubstantiation. I'm "Sam" in the comment section.

Tuesday, February 25, 2025

Zeitoun, Evidentialism, James White, and Cameron Bertuzzi

Yesterday, I watched this podcast by James White where he criticized evidentialism in light of this crazy post where Cameron Bertuzzi claimed that "Zeitoun provides stronger evidence for Christianity than does the Bible." One thing James said that jumped out at me was that, "You don't prove the highest authority by an appeal to lesser authorities" (50:17). This is the crux of his argument against evidentialism and for presuppositionalism. Whenever we appeal to something external to the word of God for verification of God or the word of God, we are appealing to a lesser authority to prove a higher authority.

Presuppositionalists begin with the Bible. Since the Bible contains the words of God, and God is the highest authority, there isn't anything external to the Bible that can serve as evidence for the veracity of the Bible. Since God is the highest authority, he can't appeal to anything higher to guarantee the truth of his own words. He can only swear by himself (Hebrews 6:13). This is the heart of the presuppositional point of view.

I wonder, though, if this is all consistent with what James has said about the Canon. I remember James saying on a few occasions that we don't have a divinely inspired table of contents for the Bible. James rightly makes a distinction between what makes something part of the Canon, and how we recognize that something is part of the Canon. What makes it Canon is that God inspired it. I'm not entirely sure how James thinks we recognize what belongs in the Canon.

James thinks the scriptures are self-authenticating. I'm not sure what that means. If it means the Scriptures attest to their own truth, that's true. 2 Timothy 3:16 and various other places confirm the truth of scripture. But I've heard other people talk about "self-authentication" in a different way. They say it has more to do with the truth of scripture being self-evident. So you should be able to read the Bible and recognize that it's the word of God. I don't know for sure if that's what James thinks or not.

If that is what he thinks, then the Canon could be settled by appeal to self-authentication. I would be surprised, though, if James thought we could know the Canon that way. I don't know if anybody in the history of the church has attempted to come up with a table of contents for the Bible based merely on "recognizing" the voice of God when reading the scriptures.

So how do we know the Canon if not by appeal to self-authentication? It seems to me the only way to know is by looking at historical evidence. We look at evidence of who wrote the scriptures, how early they were, whether they cohere with the rest of what is accepted, what the early church said, etc. History is a fallible process, though. If it is through history that we know which books contain the word of God, then aren't we appealing to a lesser authority to prove a higher authority? I would love to know what James thinks about this. Since he doesn't think we have a divinely inspired table of contents, then doesn't he ultimately need a lesser authority to prove a higher authority? He needs some fallible evidence or line of reasoning in order to demonstrate which books contain the word of God and which don't. If James appeals to history as evidence for some particular book being the word of God, then he's being inconsistent with his claim that you can't prove a higher authority by appeal to a lesser authority.

There are a couple of issues I have with James' claim that you can't prove a higher authority by appeal to a lesser authority. One problem I have with this claim, at least as he applies it to evidentialism, is that an appeal to evidence is not an appeal to authority. An appeal to authority is when you take somebody's word for something because you believe that person knows the truth. You trust a doctor to diagnose you because they are experts in medicine. You take a lawyer's legal advice because they are experts in the law. A Catholic might take the Pope's word for some theological truth because they think the Pope knows what he's talking about. But that is not how appeal to evidence works. Appeals to evidence are not appeals to authority, so evidentialism does not amount to appealing to a lesser authority to prove a higher authority.

A second problem I have with James' claim is that it seems to confuse or conflate the reliability of how you came to believe the Bible is God's word with the reliability of the Bible itself. It is possible for the Bible to be 100% reliable without you knowing it with certainty. There is nothing inconsistent with believing the Bible to be the infallible word of God even though you're not 100% certain about it. I think James is just wrong to say you can't use a lesser authority (or less than certain evidence) to demonstrate a higher authority. I think James is making the same mistake he made when criticizing Cameron Bertuzzi for using Bayesian reasoning to evaluate the probability that the Papacy is legitimate, which I explained in another post.

Interestingly, James appears to be making the same mistake that Catholic apologists make when they challenge protestants on Sola Scriptura. The Catholic argument assumes that before you can know that any book is an infallible source of authority, you need another infallible source of authority to tell you so. You need one infallible source to tell you about another infallible source. Catholics have the infallibility of the Church and/or Tradition to tell them what books belong in the Canon, but since protestants reject the infallibility of the Catholic Church and Tradition, protestants supposedly can't know the Canon.

However, this idea that you need an infallible source to tell you what sources are infallible is clearly wrong, and it seems to me that both James White and Catholic apologists are inconsistent in this area. If you need an infallible source of authority to establish an infallible source of authority, then you're either going to face an infinite regress or resort to a circular line of reasoning. There's no escaping it.

Catholic apologists often go the circular route. They believe they need an infallible Church to tell them what books are the infallible word of God. But how do they know the Church is infallible? Well, they allegedly know that because of passages like 1 Timothy 3:15. And again, they know 1 Timothy is the infallible word of God because the Church says so.

Every time I've pointed out the circularity of this reasoning to Catholics, they have attempted to avoid circular reasoning by appealing to historical arguments for the authority of the Church. So they eventually have to resort to fallible evidence to establish an infallible source of authority. If you can establish an infallible source of authority by appealing to a fallible line of reasoning or assessment of evidence, then there's no reason you can't establish the list of infallible books by appeal to fallible evidence and reasoning.

Since James doesn't think there is an infallible table of contents for the Bible (i.e. there's not an infallible list of books that belong in the Bible), he has no choice but to appeal to some fallible evidence and reasoning to establish which books are actually the infallible word of God. James has to do exactly what he criticizes evidentialists for doing. He has to engage in evidential arguments to prove what books have infallible authority. He has to prove a higher authority by appeal to a lesser authority.

He does the same thing when it comes to textual criticism. The actual words inspired by God are infallible, but James relies on the fallible methods of textual criticism to establish what those words are. He uses a lesser authority to establish a higher authority.

Before I go, I want to make sure I'm not misunderstood. Cameron claimed that the Marian apparition at Zeitoun is better evidence than the Bible for the truth of Christianity. James attacked this claim by attacking evidentialism in general. I attacked James' argument against evidentialism, but I don't want anybody to get the wrong idea and think I'm defending Cameron's claim. I think Cameron's claim is absolute nonsense. Maybe I'll blog on that at another time. In the meantime, you could watch James' video I linked to above. Besides his misguided criticism of evidentialism, he does have some valid arguments against Cameron's claim.

Friday, February 14, 2025

Protein evolution probability, take three

Wow, this is my third post in a week on this one topic. You'd think I found it interesting or something!

I've been reading around to try to find out how controversial or accepted Douglas Axe's 1 in 10^77 functional protein estimate is, and it turns out it's very controversial. There have been other estimates made by other people in which the ratio of functional to non-functional proteins is a lot higher than what Douglas Axe estimated. This paper, for example, estimates that 1 in 10^11 proteins are functional. It says,

In conclusion, we suggest that functional proteins are sufficiently common in protein sequence space (roughly 1 in 10^11) that they may be discovered by entirely stochastic means, such as presumably operated when proteins were first used by living organisms. However, this frequency is still low enough to emphasize the magnitude of the problem faced by those attempting de novo protein design.

Since this estimate is many orders of magnitude greater than what Douglas Axe estimated, I want to do a rough back-of-the-napkin estimate of the probability of getting a functional protein just in the Milky Way Galaxy within 1 billion years, using much stingier probabilistic resources than I used in my last couple of posts on this subject (here and here).

I'll assume there are 100 billion stars in the galaxy, 7% are G-type stars, only G-type stars are working on the problem, and only 20% of them have planets in the habitable zone. That's 1.4 x 10^9 planets working on the problem.

I'll assume the same proportion of carbon, oxygen, hydrogen, and nitrogen in the lithosphere of each planet, but only a small fraction is available to try to make proteins. Instead of taking the elements out of the entire lithosphere, I'll take them out of a volume about the size of Crater Lake.

I asked two different AIs to estimate the mass of the water in Crater Lake. One said about 10^13 kg, and the other said about 10^12 kg, so let's go with 10^12 kg. I'll spare you all the details I didn't spare you last time and just tell you I calculated that there would be 2.5 x 10^36 carbon atoms, which allows you to make 1.67 x 10^11 proteins with 300 amino acids each.

With 1.4 x 10^9 planets making 1.67 x 10^11 proteins per second for 1 billion years (i.e. 3.1536 x 10^16 seconds), that comes out to a total of 7.37 x 10^36 tries in all. Let's simplify that to 10^36 and plug it into our equation to get the probability of finding a functional de novo protein.

$$1 - \left(1 - \frac{1}{10^{11}}\right)^{10^{36}}$$

There you have it. It looks like you'd be guaranteed to find a functional protein. Again, I have no idea if the estimate for the fraction of functional to non-functional proteins is correct, so I still don't know if these calculations are worth anything. But based on these estimates, it looks like it's very likely you could get de novo proteins, even with stingy probabilistic resources, somewhere in the galaxy.
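For anybody who wants to check the arithmetic, here is a minimal Python sketch that plugs the figures above (1.4 x 10^9 planets, 1.67 x 10^11 proteins per second per planet, 1 billion years, and the paper's 1 in 10^11 functional fraction) into the same formula. These are just this post's rough estimates, not measured values.

```python
import math

# Rough estimates from this post (not measured values)
planets = 1.4e9             # habitable-zone planets around G-type stars
proteins_per_sec = 1.67e11  # proteins assembled per planet per second
seconds = 3.1536e16         # 1 billion years in seconds
p_functional = 1e-11        # fraction of sequences that are functional (paper's estimate)

tries = planets * proteins_per_sec * seconds   # about 7.4e36 total tries

# P(at least one functional protein) = 1 - (1 - p)^tries.
# Computed with log1p so the tiny p doesn't get rounded away.
prob = 1.0 - math.exp(tries * math.log1p(-p_functional))

print(f"total tries: {tries:.2e}")
print(f"probability of at least one functional protein: {prob}")
```

Running it prints a probability of 1.0 (the exponent is so large that the "no functional protein" term underflows to zero), which is the same conclusion as above.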

Unless I hear of some solid uncontroversial estimates of the ratio of functional to non-functional proteins of average length, I think I'm probably going to say the argument against evolution from the improbability of de novo protein evolution is not a good argument. It relies too heavily on controversial estimates. It may turn out to be valid if more information comes in, but we'll just have to wait and see. It could also be made valid by taking into consideration more of the details about how proteins are made and how cells work. More knowledge about exo-planets and the chemistry in the early earth may also contribute.

Some final thoughts

I emailed Mr. Pruett, who I mentioned in the first post, to solicit his feedback on that first post. He knows a lot more about this topic than I do. Based on what he said, there are a lot more complications in coming up with probabilities than are reflected in my thought experiment. For example, I ignored how genes actually work, including all the machinery needed to build proteins. I ignored the fact that genes can be altered somewhat without altering the resulting protein. There's also the issue of some proteins requiring other proteins in order to fold up correctly. They don't all just fold themselves. A realistic thought experiment, I'm afraid, would be really complicated.

My strategy has been similar to what we used to do in my calculus classes in college. I remember in one of the classes, we had to figure out whether an infinite series was convergent or divergent. If the series is too complicated to test directly, you can compare it to a simpler series whose terms you know are all bigger (or all smaller) than the original's. If you're testing for convergence, and your simpler series is bigger term for term than the original but converges anyway, then you know your original series is convergent too.
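To make the comparison-test idea concrete (a textbook-style example of my own, not anything from the protein argument itself):

$$\frac{1}{n^{2}+3n} \le \frac{1}{n^{2}} \ \text{ for all } n \ge 1, \quad \text{and since } \sum_{n=1}^{\infty}\frac{1}{n^{2}} \text{ converges, } \sum_{n=1}^{\infty}\frac{1}{n^{2}+3n} \text{ converges too.}$$

The protein estimates work the same way: if a deliberately stingy simplification still says a functional protein is likely, the more realistic version should say so too.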

Mr. Pruett also pointed out that I over-complicated part of my calculation. I could've just started with 10^80 atoms in the universe and figured out how many of them are carbon atoms, and gone from there. I didn't have to talk about star types, habitable planets, lithospheres, etc.

Mr. Pruett made a good point I wish I had considered. I gave very generous time constraints on building proteins, but if I wanted to test de novo genes in already existing species, those appear to pop up pretty quickly in nature. The Cambrian Explosion only lasted maybe 30 million years, and lots of new genes (and their corresponding proteins) had to have come into existence during that short window of time. That's three orders of magnitude less time than my original 13.8 billion year estimate and two orders of magnitude less than my more restricted estimates of 1 to 5 billion years.

Mr. Pruett made an interesting psychological point. Suppose we calculated that it's nearly impossible for the universe to cough up certain functional proteins, but we went out in nature and discovered that they exist. It's unlikely that a biologist would say, "Wow, that's a miracle." It's more likely they would say, "I guess nature is more clever than we thought." When it comes to trying to figure out whether nature could do something on its own or whether it needs divine assistance, our worldview presuppositions are probably going to carry more weight than our calculations.

I'm not necessarily saying that it shouldn't. After all, a person might have good reason for subscribing to their worldview. If I make some calculation that allows me to make a prediction about what I should expect to find in nature, and I go out in nature and find that things are very different, I probably should doubt the assumptions that went into my calculation. I mean that's how science works. You come up with a hypothesis, you make a prediction based on your hypothesis, and you test it by making observations to see if the prediction pans out.

I think what the protein evolution probability argument attempts to do is not test the assumptions that go into the calculation, but to test the worldview of naturalism. If you assume naturalism as part of your hypothesis, and you use various assumptions to make a calculation that predicts something about proteins, and you go out in nature and find out that your prediction was wrong, that is supposed to cast doubt, not on the assumptions that went into your calculation, but on the assumption of your worldview. Somebody who subscribes to naturalism who runs the same experiment and falsifies their prediction is going to question the assumptions that went into their calculation rather than their naturalistic worldview. And maybe they should. I don't know. I guess at that point it depends on whether you're more sure about your worldview or more sure about the assumptions that went into your calculations, not to mention your confidence in entering them in your calculator correctly.

Anyway, thank you for joining me on this journey. It's been interesting for me.

Thursday, February 13, 2025

Alvin Plantinga and Sean Carroll are on the same page

I recently read an article by Sean Carroll called "Why Boltzmann Brains Are Bad." What jumped out at me when I read this article was how similar it was to Alvin Plantinga's Evolutionary Argument Against Naturalism (EAAN).

Boltzmann Brains are not in a position to know true from false because all the information that comes their way just fluctuated into being without having any connection with reality. This could happen because the information fluctuated inside their brains, or it could happen because the world in their immediate vicinity fluctuated into existence. Either way, they cannot use their perceptions or any of their tools of reasoning to reliably come to true beliefs about the world.

If you have a model of the universe that predicts you are a Boltzmann Brain, then that model undermines any justification you would have for believing that model. The model is self-stultifying because as soon as you believe it, for whatever reason, you lose your justification for believing it.

Carroll thinks this is a good reason to reject models that generate Boltzmann Brains. Since Boltzmann Brains are "cognitively unstable," we shouldn't even consider models that generate them. They could still be true, of course. It's just that we could never be justified in believing them since they undermine the reliability of the very process we used to come up with them.

This argument is just like Alvin Plantinga's EAAN. According to Plantinga, if both evolution and naturalism are true, then it's unlikely our brains would be able to reliably distinguish between true and false. Evolution combined with naturalism generates unreliable belief-producing cognitive faculties. So if we believe in both evolution and naturalism, then we have an undercutting defeater for all of our beliefs, including our belief in evolution and naturalism.

In both cases, they are considering models of the world that generate unreliable belief-producing brains, and they are both saying that even though it's possible for such models to be true, we can never be justified in believing them. We shouldn't even consider a model of the world that makes it likely that we can't tell true from false because if we can't tell true from false, then we can't know whether the model is true or false.

Neither of them claim to have proved these models to be false. They only claim to have shown the models are not reasonable to believe or even consider.

Wednesday, February 12, 2025

Fraser Cain against the fine-tuning argument

Fraser Cain, one of my favourite science news commentators on YouTube, recently made a video where he explained why he doesn't think the fine-tuning argument is a good argument (beginning at the 4:13 point in the video). He gave a few of the standard responses, and I didn't think any of them were good responses, so I'm going to respond to them.

Most of the universe is uninhabitable

First, he said the universe is only barely habitable. The vast majority of the universe is uninhabitable. To begin with, you have all the vast emptiness of space. Then you have stars that can't support life. Then most planets are also lifeless. Finally, only the thin surface of some planets (like earth) is habitable.

This is not a good argument against the fine-tuning argument, and there are a few reasons. One reason is that it doesn't dispute the fact that if you changed any of the laws or constants by a hair, life wouldn't be possible at all. As I explained in another post, the universe could be fine-tuned for the possibility of life even if there happened not to be any life at all. The existence of just one life form proves that the universe is habitable. If the constants of nature have to be fine-tuned before that could be possible, then the universe is fine-tuned for life even if life is extremely rare.

A second reason that I also mentioned on that post is that even given ideal conditions, the actual emergence of life might be an extremely improbable event. I discussed that in two posts recently where I tried to calculate the probability of getting a functional protein given the vast probabilistic resources in the universe. My estimates and assumptions were rough, but based on them, it looks like we should expect the actual emergence of life to be rare. But the fact that it's even possible means the universe is fine-tuned.

A third problem with this argument is that empty space is necessary for habitability. Imagine if the entire universe were filled with a life-friendly atmosphere like here on earth. If that were the case, there would be two major problems. One problem is that there would be too much mass, causing the universe to collapse, ending any chance of life. The second problem is that there couldn't be any stable orbits. You need empty space so there isn't friction when planets orbit stars and stars orbit galaxies.

The universe would have to be habitable for us to be observing it.

The second point he makes is the anthropic principle. The universe would have to be habitable for us to be here observing it.

This is not a good response to fine-tuning either. If a thousand people aimed their rifles at me and fired, but they all missed, nobody would say, "There's nothing remarkable about the fact that you're alive since you'd have to be alive to consider whether there's anything remarkable going on." Of course it would be remarkable if I were alive! Me being alive would require an explanation because of how unlikely it would be for me to survive that many people shooting at me.

The anthropic principle is a version of the observer-selection effect. The observer selection effect would explain why we find ourselves in a habitable universe rather than an uninhabitable universe if we assumed both kinds existed (e.g. if we assumed a multiverse with random combinations of laws and constants). If there is a multiverse, and the vast majority of universes were uninhabitable, the anthropic principle would explain why we find ourselves in one that's habitable. It's because a habitable universe is the only kind of universe that can be observed. All observers observe habitable universes.

The anthropic principle only works as an explanation of fine-tuning if you combine it with a multiverse. But Fraser doesn't even suggest a multiverse. If there are far more ways the universe could've been uninhabitable than there are for the universe to have been habitable, and there's just one universe, then the probability is that the one universe would be uninhabitable. The fact that we're alive at all shows that the universe is habitable. That requires an explanation just as being alive in the firing squad analogy requires an explanation. Why has the most unlikely thing happened? It won't do to dismiss the question on the basis that if it hadn't happened, we wouldn't be around to wonder about it.

I said more about this argument, including the puddle analogy that's often invoked, here.

Any universe is improbable.

A third thing he said was that if you threw a dart out of an airplane, no matter where the dart lands, it's improbable that it would've landed at that particular spot.

That's an argument I used to have as a college freshman against teleological arguments in general, but it's a terrible argument. As somebody who has a basic understanding of entropy and the second law of thermodynamics, Fraser ought to know better. While any random arrangement of parts in a closed system is equally improbable, there are certain kinds of arrangements that are less probable than other kinds. Some are ordered kinds, and some are random kinds. To use an analogy, imagine dumping a box of alphabet cereal on the floor. Any particular arrangement is equally improbable, but there are certain kinds of arrangements (namely, the kind that spell words and sentences) that are far less probable than other kinds (namely, the kinds that don't spell words or sentences). In the same way, any random combination of values for the constants of nature might be equally improbable, but the combinations that result in habitable universes are extremely rare. That's the real issue.
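A quick toy simulation makes the point concrete (this is my own illustration, using a tiny hard-coded word list): every specific string of letters is equally improbable, but the class of strings that spell something is vanishingly rare.

```python
import random
import string

# Tiny, made-up word list just for illustration
words = {"bread", "water", "stars", "atoms", "flesh", "cells"}

trials = 1_000_000
hits = 0
for _ in range(trials):
    s = "".join(random.choices(string.ascii_lowercase, k=5))
    if s in words:
        hits += 1

# Any one specific 5-letter string has probability 1/26^5, about 8.4e-8,
# yet the "meaningful" class is so small that hits will usually be 0 or 1.
print(f"probability of one specific string: {1 / 26**5:.1e}")
print(f"observed fraction spelling a listed word: {hits / trials:.1e}")
```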

We just don't know why the universe is habitable.

The fourth thing he said was that the fine-tuning argument shuts down scientific inquiry. If we don't know why the universe appears to be fine-tuned for life, we should say, "I don't know," and try to find out instead of suggesting God did it.

The problem with this argument is that it begs the question against a theistic explanation. It just assumes theism is the wrong answer. This is a response one could use against any hypothesis.

Consider the big bang as an explanation for the CMBR and the red shift of distant galaxies. One could just as easily invoke Fraser's argument and say, "I don't know why there's a CMBR or why there's a red shift to distant galaxies" instead of suggesting a big bang did it. You could run the same argument against cosmic inflation.

If Fraser doesn't think God is a good explanation, he needs to say specifically why. Is there a better explanation? Is God an insufficient explanation? Is there some reason to think God doesn't exist? Any of these could be a good response, but that's not where Fraser goes.

Imagine applying the same reasoning to an alleged crime scene. You look around and see what appears to be evidence that a murder took place, but your supervisor says, "Hey, if you don't know how the person died, then don't just assume a murderer did it. You should suspend judgment until you find out what did cause the death." Well, if everything at the crime scene points to a murderer, then that's what you should think is the explanation.

If you have good reason to think you've identified the correct explanation for some observation, then you're perfectly within your rights in concluding that your explanation is correct. There's no reason to say, "I don't know," and wait for the alleged right explanation as if you don't already have the right explanation.

Coming to a conclusion about the correct explanation for your observations doesn't mean you shut down inquiry. You can hold your belief provisionally and be open to changing your view if new information comes along. But you don't need to suspend judgment when you have evidence that points to a particular explanation.

Notice that nobody ever says the same sort of thing about any other explanation besides God. When they came up with dark matter as an explanation for flat galaxy rotation curves, nobody said, "Don't use dark matter to explain galaxy rotation because that shuts down scientific inquiry. Instead, hold out for a better explanation." Dark matter didn't put an end to inquiry. People still proposed other explanations, like Modified Newtonian Dynamics (MOND). Whether you think the correct explanation for flat galaxy rotation curves is MOND or dark matter, you are free to be open to new information that might point to a different explanation. Having an explanation doesn't shut down further inquiry.

Science is provisional. So is every field of inquiry. We make our best conclusions based on the evidence that's available to us. We don't withhold judgment about every single conclusion we come to merely on the basis that it's possible some new piece of information will come along in the future that overturns what we previously thought we knew. We don't stop investigating the world or testing what we think we know just because we think we already have the right answers. So there's no reason in the world to think that belief in God as the explanation for fine-tuning will put a stop to scientific inquiry.

Tuesday, February 11, 2025

Functional protein probabilities using ChatGPT's estimates

Yesterday, I made a post talking about the probability of one functional protein 200 amino acids long being formed through undirected processes somewhere in the observable universe. I had to make a lot of guesses, but to give our protein its best chance, I made very generous estimates. Based on my estimates, I calculated a near 100% probability of the universe spitting out at least one functional protein 200 amino acids long.

Today, I thought I'd see what ChatGPT would say. I'll use the same probability equation, and the same line of reasoning, but I'll let ChatGPT come up with my estimates for me. Whenever ChatGPT gives a range, I'll use the upper end of the range (with one exception). Here's what ChatGPT said:

Stars in the universe: 1 x 10^21

Fraction of stars that could host planets in the habitable zone: 25%

How much carbon, nitrogen, hydrogen, and oxygen are on an average planet like earth?:

For rocky planets like earth. . .

H: 2%
C: 0.5%
N: 0.3%
O: 50%

ChatGPT didn't say, but I'm going to assume those percentages are by mass. It looks like ChatGPT is just considering earth's crust, too, which is good. That's what I want.

I wanted to know which of these would be the limiting factor, so I asked ChatGPT how many of each atom we would have if we took one of each of the 20 usual amino acids and added up all the hydrogen, carbon, nitrogen, and oxygen in them. A couple of them have sulfur, but I'm going to ignore that for simplicity. ChatGPT said,

C: 101
H: 161
N: 29
O: 49

It looks like either carbon or hydrogen is going to be the limiting factor. Let's go with carbon.

What is the average length of a protein?

ChatGPT said 300 to 400. This time, I'm going to go with that lower limit of 300.

What is the average lifespan of a star?

ChatGPT gave three estimates--one for red dwarfs, one for high mass stars, and one for sun-like stars. The red dwarfs live a really long time, but their planets are mostly uninhabitable because of how active the stars are, and massive stars don't live very long at all, so I'm just going to go with sun-like stars. The average there is 10 billion years.

It seems unreasonable to use the entirety of earth's mass in my calculation because proteins aren't going to form in the mantle or in earth's core. So I asked ChatGPT how much of earth's mass makes up the lithosphere. ChatGPT said 1 to 2%, so I'm going to go with 2%. Earth's mass is 5.7 x 10^24 kg, so the lithosphere must be 1.14 x 10^23 kg.

Let's do some calculations.

First, I'm still going to assume 1 try per second.

I'm going to assume all the amino acids are in one big soup.

The mass of the earth's lithosphere is 1.14 x 10^23 kg. 0.5% of that is carbon, so there's 5.7 x 10^20 kg of carbon in the lithosphere. An average carbon atom weighs 1.99 x 10^-26 kg, so there are about 2.86 x 10^46 carbon atoms in the lithosphere.

You need 101 carbon atoms for a full set of the 20 standard amino acids, so with those carbon atoms, you can create 2.83 x 10^44 full sets. Each set has 20 amino acids, so that's 1.42 x 10^43 individual amino acids per planet.

An average protein has 300 amino acids, so that's 4.73 x 10^40 proteins per planet. That's going to be the number of tries per second per planet.

There are 1 x 10^21 stars, and 25% of them have planets in the habitable zone, so that's 2.5 x 10^20 planets in the habitable zone.

Although earth has 10 billion years, only 5 billion of that will have life on it. Proteins need to form in a shorter span than 5 billion years if there are to be multiple species and diversity, so I'm going to give each planet 2 billion years to create an average protein. That's 6.31 x 10^16 seconds.

Now, I think we can calculate the number of tries.

(1 try per second) x (6.31 x 10^16 sec) x (4.73 x 10^40 proteins/planet) x (2.5 x 10^20 planets) = 7.46 x 10^77 protein tries. This is getting interesting.

Now, we can plug that into our equation using the Douglas Axe estimate of 1 functional protein for every 10^77 proteins of a given length. He used 150 amino acids, but I'm assuming the fraction is the same for all lengths.

$$1 - \left(1 - \frac{1}{10^{77}}\right)^{7.46 \times 10^{77}}$$

The exponent is close in size to the 10^77 in the denominator, so we could get a meaningful probability here. Since I can't put those huge exponents in my calculator, I played around. I tried replacing the 10^77 in both places with 2, 10, 100, 1000, and 1,000,000. I got pretty close to the same result each time, so I'll bet that's what it is. The probability came out to be 99.9%, which means you'd be practically guaranteed to get a functional protein.
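Since my calculator couldn't handle the exponents, here's a short Python sketch that evaluates the expression above using logarithms, with the 7.46 x 10^77 tries and Axe's 1 in 10^77 estimate. It's just a numerical check of this post's figures, not an independent estimate.

```python
import math

tries = 7.46e77        # estimated protein "tries" from the numbers above
p_functional = 1e-77   # Douglas Axe's estimate of the functional fraction

# (1 - p)^tries evaluated as exp(tries * ln(1 - p)); log1p keeps the
# tiny p from rounding away. Here tries * p is about 7.46, so the
# probability of *no* functional protein is exp(-7.46), about 0.00058.
prob_none = math.exp(tries * math.log1p(-p_functional))
prob_at_least_one = 1.0 - prob_none

print(f"probability of at least one functional protein: {prob_at_least_one:.4f}")
```

It prints about 0.9994, which agrees with the roughly 99.9% I got by playing around with smaller exponents.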

It is possible that I made a math error. I've gone through and corrected myself two or three times since posting this, so there's a possibility I could go through it again and find another mistake.

A lot of these numbers are speculative. I guess you can get whatever probability you want depending on how you massage the numbers. You can be generous or stingy with your assumptions. As I said in the last post, I think the pivotal unknown is the fraction of proteins of a given length that could be functional out of all the possible sequences of amino acids in a given length. I suggested in the last post how we might be able to figure that out with the new AlphaFold AI thingy. Since nobody has done it, as far as I know, I used Douglas Axe's estimate, which, as I explained in the last post, I'm not so sure about.

One thing I learned in this whole thing is that if you're just looking for any functional protein, the length of the protein doesn't figure into the probability (except when you're determining how many proteins you're going to get per planet with your available amino acids). All that matters is what fraction of proteins of any given length will be functional. That fraction may, for all we know, be the same regardless of length. But like I said in the last post, we don't necessarily know that the fraction is the same in all lengths. The only way to figure that out is through experimentation or simulation. Assuming it's the same for all lengths, the length only figures into the probability if you're looking for one particular sequence of that length. Then the length matters a great deal to the probability.

You could make the length relevant if you considered the probability of different lengths with any sequence. It does seem like the longer a sequence is, the less probable it is. On the other hand, that may have a lot to do with how it is formed. If you had two proteins 200 units long, and they merged in one event, you'd have one 400 units long. That would be easier than if you had one 200 units long and it mutated through successive generations until it grew to 400 units. It's probably simpler to leave this probability out.

One interesting thing I took from this is that if you ignore the 1 x 10^21 stars in the universe and all the planets surrounding them, and you focused only on earth, the probability of getting any functional protein on earth would be almost non-existent. But if you include the whole observable universe, then you're guaranteed to get the functional protein somewhere in the universe. So there's a sense in which we really did win the lottery here on earth.

That's assuming, of course, that there's some validity to my thought experiment. It is, admittedly, speculative. It uses a lot of really rough estimates and simplifications. If there is some validity to it, it would answer the Fermi paradox. Life in the universe is extremely rare. Advanced intelligent life like ours even more so.

Wait! There's more! I wrote a third post on this topic after looking further into estimates for functional to non-functional amino acid sequences and after getting some feedback from Paul Scott Pruett.

Monday, February 10, 2025

Evolution's mathematical obstacle

There are a few equations I put in this post using a new trick I learned recently. Sometimes, the equations look really tiny. If that happens to you, just hit the refresh button, and they should be big again. The issue may just be my browser.

One of the most interesting things I've read or heard about concerning the mathematical obstacles to evolution is an argument that says the probability of getting just one functional protein of average length in the entire history of the universe, even given unrealistically generous probabilistic resources, is so vanishingly small that it's not reasonable to believe that new proteins could form through undirected natural processes.

One particularly good presentation of an argument like this is "The Statistical Case Against Evolution" by Paul Scott Pruett. He notes that there are examples of convergent evolution, not just on the macro scale, but on the scale of genes and proteins as well. That means nature seems to aim at particular target proteins. The odds of getting particular target proteins are much smaller than the odds of getting just any functional protein, yet nature seems to produce the same target proteins over and over.

I also saw this video on YouTube, but based on what Scott told me, it's a little sketchy. Scott thought some of his assumptions were either too generous or just arbitrary. I have issues with this presentation, too, but it's at least easy to understand.

I have been skeptical of this argument for a number of reasons, but I thought it might be interesting for me to try to work through the line of reasoning myself and see what I come up with. While trying to work through it, I came up against some unanswered questions that prevented me from completing the argument. I've put this blog post on hold and revisited it from time to time over the last few years, but now I thought I'd just make a post about where I'm at. Maybe if I posted about my unanswered questions and why I think they are relevant, somebody will have something to say about them.

So, here we go.

An amino acid is an organic molecule made of hydrogen, carbon, oxygen, and nitrogen. There are 20 different kinds of amino acids that make up the proteins in all life on earth. Proteins are strings of amino acids. Depending on the length of these sequences and their order, proteins can be folded into stable shapes. The shapes of the proteins are what give them their function. You can think of them like car parts.

The amino acids can be strung together in any order. You can think of them like letters in the alphabet. You can string a bunch of letters together in any order. Just as some of those arrangements will produce gibberish while other arrangements will produce coherent words and sentences, so also some sequences of amino acids can be folded into stable functional proteins, and others cannot.

Proteins come in different lengths, but the average protein is about 200 amino acids long. Since each position along the string could contain any of 20 different amino acids, there are 20^200 possible sequences in a protein that's 200 amino acids long.
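Just to get a sense of scale (a quick check of my own, not part of the argument itself), Python can put a number on 20^200:

```python
import math

sequences = 20 ** 200          # exact integer: 20 choices at each of 200 positions
print(len(str(sequences)))     # 261 digits
print(200 * math.log10(20))    # about 260.2, i.e. roughly 1.6 x 10^260 sequences
```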

If you were to randomly pick out a sequence of amino acids 200 units long, the odds that you would get any one particular sequence would be 1 in 20^200, which is pretty small. However, there are more things to consider in this argument, which brings me to some of my unanswered questions.

Any low probability can be overcome if you have enough chances for it to happen. If you were trying to guess the combination of a lock, you may have a 1 in a million chance of making the right guess on the first try, but if you tried a million times, you'd have a good chance of guessing correctly in at least one of those tries.

Let's suppose you want to aim for a particular sequence of amino acids 200 units long. You start on day one at the beginning of the universe, and you do one try per second continuously until today. There's no need to be precise, so roughly. . .

13.8 billion years x 365 days/year x 24 hours/day x 60 minutes/hour x 60 seconds/minute = 4.35 x 10^17 seconds

With that being the case, what are the chances of getting the correct sequence if you made one attempt every second for 13.8 billion years?

Let me make a detour here and clarify something I used to be confused about.

If you have a six-sided die, and you rolled it, the odds of getting any given number would be 1 in 6, right? So you'd think that if you rolled it six times, the odds of getting any given number would be 1. In other words you'd be guaranteed to get the right number. But that obviously isn't right because it's possible to roll it six times and never get a 2. So here's the correct way to figure out the probability of getting a 2 if you rolled it six times.

Each time you roll the die, you have a 5 in 6 chance of not getting a 2. So if you roll it six times, the probability of not getting a 2 on all six rolls can be given by,

$$\left(\frac{5}{6}\right)^{6}$$

Since that's the probability of not getting a 2 in the six rolls, you can subtract that number from 1 to get the probability that you will get a 2.

$$1 - \left(\frac{5}{6}\right)^{6}$$

That comes out to a 66.5% chance, or about 1 in 1.5, which is obviously lower than a 100% chance.
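As a sanity check, the die example is easy to verify both by the formula and by brute-force simulation (a quick sketch of my own):

```python
import random

# By the formula: 1 - (5/6)^6
formula = 1 - (5 / 6) ** 6
print(f"formula: {formula:.4f}")          # about 0.6651

# By simulation: roll a die six times and see if a 2 ever shows up
trials = 100_000
hits = sum(
    1 for _ in range(trials)
    if any(random.randint(1, 6) == 2 for _ in range(6))
)
print(f"simulated: {hits / trials:.4f}")  # also about 0.665
```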

Now, let's apply that same reasoning to figure out what the chances are of getting our target protein. Since the chance of getting the right sequence in one try is 1 in 20^200, the chance of not getting the right sequence is,

$$\frac{20^{200} - 1}{20^{200}}$$

And the chance of getting the right sequence in 4.35 x 10^17 tries is,

$$1 - \left(\frac{20^{200} - 1}{20^{200}}\right)^{4.35 \times 10^{17}}$$

Unfortunately, my dinky calculator can't handle those kinds of numbers, but if you think about it, the probability is really small. That means if you were to make one random attempt every second for 13.8 billion years to get a particular sequence of amino acids 200 units long, there's almost no chance that it would happen. But I would love to see the actual number.
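For what it's worth, the number can be pinned down with logarithms rather than a calculator. When the number of tries times the per-try probability is much less than 1, 1 - (1 - p)^N is almost exactly N x p, so (a quick sketch of my own using the figures above):

```python
import math

log10_p = -200 * math.log10(20)     # log10 of 1/20^200, about -260.2
log10_tries = math.log10(4.35e17)   # about 17.6

# Since tries * p is astronomically small, P is approximately tries * p
log10_prob = log10_tries + log10_p
print(f"probability is roughly 10^{log10_prob:.1f}")   # about 10^-242.6
```

So the probability comes out to roughly 10^-243, which is about as close to zero as a number can get.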

We can improve these odds, though. We know that in reality, there could be tries happening simultaneously all over the universe each second (relativity of simultaneity notwithstanding). Let's imagine some really generous probabilistic resources to improve our odds.

The internet estimates that there are 10^50 atoms in the earth. Let's suppose all these atoms are just hydrogen, carbon, oxygen, and nitrogen, that they are currently part of amino acid molecules, and that they are all joining in the effort to make our target protein. The average amino acid contains 10 atoms, so there would be about 10^49 amino acids that make up the earth.

$$\frac{10^{50}\ \text{atoms}}{10\ \text{atoms/amino acid}} = 10^{49}\ \text{amino acids}$$

The internet also estimates that there are 200 billion trillion stars in the universe. That's 200,000,000,000 x 1,000,000,000,000 = 2 x 10^23 stars. Let's imagine there are two earth-like planets for each star, and they have all existed for the entirety of the 13.8 billion years of the universe. That's 4 x 10^23 earth-like planets, all trying to make this one protein.

In that case, there would be 4 x 10^72 amino acids available to make proteins.

$$10^{49}\ \text{amino acids/planet} \times 4 \times 10^{23}\ \text{planets} = 4 \times 10^{72}\ \text{amino acids}$$

Since each try uses up 200 amino acids, there are 2 x 10^70 tries going on each second.

$$\frac{4 \times 10^{72}\ \text{amino acids}}{200\ \text{amino acids/try}} = 2 \times 10^{70}\ \text{tries}$$

Since we already figured out that there are 4.35 x 10^17 seconds in 13.8 billion years, that means there are 8.7 x 10^87 tries in the history of the universe.

$$2 \times 10^{70}\ \text{tries/sec} \times 4.35 \times 10^{17}\ \text{sec} = 8.7 \times 10^{87}\ \text{tries}$$

Now we can adjust the original probability we got to account for all these generous probabilistic resources. Now, we get,

$$1 - \left(\frac{20^{200} - 1}{20^{200}}\right)^{8.7 \times 10^{87}}$$

That's an improvement, and although my dinky calculator can't give you the actual number, you should be able to tell that it's still an extremely small number. Look at that fraction. The numerator and denominator are almost exactly the same because if you subtract 1 from a number as big as 20^200, you haven't subtracted much, relatively speaking. That means the fraction is extremely close to 1, which means that number raised to the 8.7 x 10^87 power, though smaller, is still going to be very close to 1. And that means 1 minus that number is going to be very close to zero. And that means there's nearly a zero percent chance of getting the target protein.
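Running the same log-based calculation as before, just with the upgraded 8.7 x 10^87 tries, puts a number on it (again, a sketch of my own, not an exact figure):

```python
import math

log10_p = -200 * math.log10(20)     # one chance in 20^200 per try
log10_tries = math.log10(8.7e87)

print(f"probability is roughly 10^{log10_tries + log10_p:.1f}")   # about 10^-172.3
```

So even with all those generous resources, the probability of hitting one particular 200-unit sequence is roughly 10^-172, which is still effectively zero.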

Up until now, we have only been trying to calculate the odds of getting one specific sequence of amino acids 200 units long. But if we are just trying to find out what the odds are of getting any functional protein given the same probabilistic resources, our odds should greatly improve. The reason is that for amino acid sequences of any given length, there is more than one sequence that could be functional.

There are two things necessary for a protein to be functional. First, it needs to be able to fold up into a particular shape and hold that shape. Second, it needs to exist in an environment in which it serves a purpose. I'm going to ignore that second requirement for the sake of this thought experiment because that would complicate things. Whether a protein serves a purpose depends on the shape of every other protein in its environment. For the purposes of this thought experiment, I'm going to assume that any protein that can fold up into a stable shape has the potential to be functional. I just want to know what the odds are of getting any potentially functional protein in the history of the universe given our generous probabilistic resources.

It is at this point in the game that I have run up against a wall. To continue the thought experiment, I need to know, out of all the 20^200 possible sequences of amino acids in my protein, what fraction of them are capable of folding up into a stable shape.

We know already that you can alter a few of the amino acids in a sequence and still end up with the same functional protein. If that weren't the case, we'd all be genetically identical. It is our genes that store the information to build our proteins. Two people can have the same gene that codes for the same protein, but there will be slight differences between them. Those differences are what make us genetically unique. It's why DNA evidence is useful in criminal investigations. It's also why 23andMe can find your relatives. The closer the relation, the more similar the DNA sequence.

Besides variations in the same protein, you can have completely different proteins (i.e. proteins that fold into a different shape and perform a different function) that are the same length, or close to the same length.

If we looked only at the proteins that exist in nature, we'd find that almost all of them are functional. Otherwise, nature wouldn't have preserved them. So we can't just look at the existing proteins to estimate how many sequences out of the 20^200 possibilities could be functional. I've seen people make that mistake.

It would be great if we could build that many and just see for ourselves what fraction of them fold into stable shapes. But 20^200 is too many, and they're not easy to make anyway. Another way is to use computer simulations. We could just have a computer predict how they would fold.

Predicting how a sequence of amino acids will fold up has been a notoriously difficult problem for a while now. Veritasium recently posted a video about it you should check out. Mithuna Yoganathan at the Looking Glass Universe channel also made a video about it a while back. The good news is that it looks like, thanks to AI, the notorious protein folding problem has been solved. AI can now predict, with 90% accuracy, how a given string of amino acids 30 units long will fold up. Until this breakthrough came along, I don't see how anybody could possibly know what fraction of proteins of a given length could fold up into stable shapes. Now, it looks like it's possible to figure it out.

How would they do it, though? One way would be to try every sequence. There's probably not enough computing power for that, though. It might work if you were only considering sequences 10 or 20 units long, but if you try 200 units long, no computer has that kind of power.

Another way is to try a representative sample size and extrapolate. Maybe they could try a million random sequences to see what fraction of them fold up into stable shapes. Then they could try another million and see if they get the same fraction. If they do, then they can extrapolate to the whole 20^200 possibilities and estimate the fraction of them that can make functional proteins. Will somebody out there please try this? I would love to know.
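Just to make the suggestion concrete, here's a minimal Python sketch of the kind of sampling experiment I have in mind. The predicts_stable_fold function is only a placeholder for whatever real structure-prediction tool somebody would plug in; that's the hard part, and I'm not claiming any particular tool works this way:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # one-letter codes for the 20 standard amino acids

def random_sequence(length):
    """Build a random amino acid sequence of the given length."""
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def estimate_stable_fraction(length, samples, predicts_stable_fold):
    """Estimate what fraction of random sequences of a given length fold into
    a stable shape, according to whatever folding predictor you hand it."""
    stable = sum(
        1 for _ in range(samples)
        if predicts_stable_fold(random_sequence(length))
    )
    return stable / samples

# Hypothetical usage, once somebody has a real predictor:
# frac1 = estimate_stable_fraction(200, 1_000_000, predicts_stable_fold)
# frac2 = estimate_stable_fraction(200, 1_000_000, predicts_stable_fold)
# If frac1 and frac2 roughly agree, extrapolate that fraction to all 20**200 sequences.
```

The point is just to run it on two (or more) independent samples and only trust the extrapolation if the estimates agree.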

I was recently reading Stephen Meyer's book, Return of the God Hypothesis, and I was relieved to see that Meyer addressed this issue I was having. This excerpt gave me hope that I was at least thinking it through correctly. He said,

Nevertheless, when I first met Denton, he told me that it was not yet possible to make a conclusive mathematical determination of the plausibility of a random mutational search for new functional genes and proteins. Molecular biologists, he told me, could not yet quantify how rare functional DNA sequences (genes) and proteins were among all the possible sequences of nucleotide bases and amino acids of a given length. Consequently, they couldn't yet calculate the relevant probabilities - and thus assess the plausibility of random mutation and natural selection as a means of producing new genetic information.

This looks to be on page 309 or 310, but I'm using a Kindle, so I can't be sure. Anywho, when I read that recently, I was all like, "That's what I've been saying!" A few pages later, he repeated basically the same thing. He said,

They also need to know how rare or common functional arrangements of DNA are among all the possible arrangements for a protein of a given length. That's because for genes and proteins, unlike in our bike-lock example, there are many functional combinations of bases and amino acids (as opposed to just one) among the vast number of total combinations. Thus, they need to know the overall ratio of functional to nonfunctional sequences in the DNA.

That's on page 312, I think. A few pages later, Meyer said he met Douglas Axe who had tried to answer this question. Axe determined that functional proteins are extremely rare. Meyer writes,

How rare are they? Axe set out to answer this question using a sampling technique called site-directed mutagenesis. His experiments revealed that, for every one DNA sequence that generates a short functional protein fold of just 150 amino acids in length, there are 10^77 nonfunctional combinations - combinations that will not form a stable three-dimensional protein fold capable of performing a specific biological function.

If I'm reading that right, it would mean only 1 in 10^77 sequences 150 amino acids long are functional. How many is that? We can figure that out with a ratio.

$$\frac{x}{20^{150}}=\frac{1}{10^{77}}$$

So,

$$x=\frac{20^{150}}{10^{77}}\approx 1.4\times10^{118}$$

The probability of not getting a functional sequence in 1 try would be,

$$\frac{20^{150}-1.4\times10^{118}}{20^{150}}$$

And the odds of getting a functional sequence in 8.7 × 10^87 tries are,

$$1-\left(\frac{20^{150}-1.4\times10^{118}}{20^{150}}\right)^{8.7\times10^{87}}$$

Will somebody out there with a fancy schmancy calculator please calculate that and leave a comment with the answer? We can probably simplify it with an approximation. This should give close to the same result:

$$1-\left(1-\frac{1}{10^{77}}\right)^{10^{88}}$$
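For what it's worth, here's a sketch of how that could be computed in Python with the mpmath library, taking Axe's 1 in 10^77 figure and the 8.7 × 10^87 tries from earlier at face value:

```python
from mpmath import mp, mpf

mp.dps = 120  # plenty of precision to represent 1 - 10**-77

p_per_try = mpf(10) ** -77  # Axe's estimated chance that a random 150-mer is functional
tries = mpf('8.7e87')       # the generous number of tries from earlier

p_total = 1 - (1 - p_per_try) ** tries
print(p_total)              # prints 1.0 for all practical purposes
```

If that's set up right, the answer is 1 minus a number on the order of e^(-8.7 × 10^10), which is so absurdly close to 1 that it just prints as 1.0.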

Either way, the result is extremely close to 1, meaning nearly a 100% chance. I wonder what would happen if I assumed more realistic probabilistic resources.

After playing around on my calculator, using more manageable numbers, I noticed that if the outer exponent (e.g. the 10^88 in the above equation) is higher than the number in the denominator (e.g. the 10^77 in the above equation), the probability is close to 100%, and if it's lower, the probability is close to 0%. It's only when they are close to each other that you get a probability in the 20 to 80% range. Let me see what happens if I take Douglas Axe's word for the 1 in 10^77 figure and use more reasonable probabilistic resources.
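If I'm remembering my math correctly, there's a standard approximation that explains that pattern. For a huge denominator d and N tries,

$$1-\left(1-\frac{1}{d}\right)^{N}\approx 1-e^{-N/d}$$

When N is much bigger than d, N/d is huge, e^{-N/d} is essentially 0, and the probability is essentially 1. When N is much smaller than d, N/d is nearly 0, e^{-N/d} is nearly 1, and the probability is nearly 0. You only get something in between when N and d are within a couple orders of magnitude of each other, which matches what I was seeing on the calculator.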

Let's keep the assumption of 2 × 10^23 stars in the observable universe. Not all stars are going to have habitable planets because, for example, red dwarf stars are more active and are likely hostile to life. About 70 to 80% of stars are red dwarfs. I also suspect that stars living near the centers of galaxies aren't as conducive to life. A generous but more realistic estimate for the number of habitable star systems, then, would be half of the total stars, so let's go with that: 1 × 10^23. That's a simpler number anyway.

Let's assume all these star systems have 1 planet or moon with amino acids, temperatures, and other conditions capable of supporting life. Now we have 1 × 10^23 planets.

Red dwarfs live longer than medium-sized or ginormous stars, but we've eliminated most of them. The more massive a star is, the shorter its life, so the less time there is for life to emerge. Since our star is medium-sized, and since they say life on earth has maybe 1 billion years left, let's assume the average planet has 5 billion years in which to produce a functional protein. So instead of calculating the number of seconds in 13.8 billion years, we're going to use the number of seconds in 5 billion years. That's 1.57 × 10^17 seconds.

Let's stick with 1 try per second, but this time, we're not going to assume each planet is nothing but amino acids. We'll still make a generous assumption, though. Let's assume 1/4 the mass of earth's oceans is made of amino acids. According to the internet, there are estimated to be 4.64 × 10^43 water molecules in earth's oceans. A water molecule is made up of 3 atoms, so that's 13.92 × 10^43 atoms. We're taking 1/4 of that, so that's 3.48 × 10^43 atoms. The average amino acid is made up of 10 atoms, so there are 3.48 × 10^42 amino acids. Our target protein this time is 150 amino acids long because we're using Axe's number. So there are 2.32 × 10^40 proteins' worth of amino acids on each planet at any given moment.
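Written out the same way as the earlier equations, that chain is:

$$4.64\times10^{43}\ \text{water molecules}\times 3\ \text{atoms/molecule}\times\frac{1}{4}=3.48\times10^{43}\ \text{atoms}$$

$$\frac{3.48\times10^{43}\ \text{atoms}}{10\ \text{atoms/amino acid}}=3.48\times10^{42}\ \text{amino acids}$$

$$\frac{3.48\times10^{42}\ \text{amino acids}}{150\ \text{amino acids/protein}}\approx 2.32\times10^{40}\ \text{proteins}$$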

Now we can calculate the number of tries.

$$1\ \text{try/sec}\times 2.32\times10^{40}\ \text{proteins/planet}\times 1\times10^{23}\ \text{planets}\times 1.57\times10^{17}\ \text{sec}\approx 3.6\times10^{80}\ \text{protein tries}$$

Our new probability is,

$$1-\left(1-\frac{1}{10^{77}}\right)^{3.6\times10^{80}}$$
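Using the same approximation as before, with N = 3.6 × 10^80 tries and d = 10^77, the exponent N/d comes out to 3,600, so this is approximately

$$1-e^{-3600}$$

and e^{-3600} is unimaginably small.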

It looks like with the more realistic assumptions, albeit still generous, we still have a probability near 100% that at least one functional protein will be created somewhere in the observable universe. Of course in reality, you need thousands of proteins for life, and you need them all on the same planet, so maybe, just maybe, things will turn out to be unlikely after all.

I'm not totally convinced by Meyer's argument, even if the probability is small, because I don't really understand how Axe came up with this number, and I don't know whether his number is accepted by the community of geneticists and biologists out there. I don't know if there's any controversy about it or whether it's a widely accepted estimate.

I'm a little skeptical for the reasons I explained earlier--the fact that until recently, there was no way to predict how proteins would fold just from knowing the sequence. Whatever method Axe used, it seems like the method I suggested earlier would probably work better. I feel arrogant saying that given how little I know and understand, and I don't mean to sound that way. I'm just expressing what makes sense to me.

There's another issue that's relevant to this whole conversation, and that's how new genes/proteins are formed. There are lots of ways they can come about, and the way they come about should have some bearing on their probabilities. If you were just creating a fresh protein from scratch, the probability of getting a functional sequence would be far less than if you took two already existing functional proteins and spliced them together. Since we already know that each half folds into a stable shape, it's not that unlikely that the combination will also fold into a stable shape.

There are other ways to create new proteins, too. One way is to cut one in half. Another way is to insert a sequence into an already existing protein. You could even insert a sequence that existed in a different functional protein. You could take a functional protein and delete a section, and it would probably result in a different functional protein. So there are all kinds of ways to get new functional proteins from old ones, and those don't strike me as being nearly as improbable as creating one from scratch.

However, according to what I've read, there are genes and proteins that do emerge seemingly from scratch. They call them de novo genes or orphan genes. They don't have any known precursor. Some of these de novo genes might have precursors that are just lost to biological history. It doesn't mean they didn't exist. But some appear to have somehow emerged from what used to be called the "junk" part of the DNA. In a sense, they did emerge from scratch. It seems to me this argument I'm trying to think through would only apply to de novo genes that emerged from scratch or from the "junk" part of DNA, if there is such a thing.

If there's a section of DNA that doesn't code for proteins or that doesn't serve some other purpose, it should be invisible to the forces of natural selection. Natural selection tends to preserve useful sequences and to get rid of harmful sequences. But if there's a sequence that is neither useful nor harmful, then for all practical purposes, it's random. If a gene emerges from a random sequence, then that would be an example of a de novo gene. A de novo gene like that can't be built up over time by making small improvements to an already existing functional sequence. These types of genes must feel the full force of the improbability we tried to calculate earlier. These types of genes used to be thought rare, but it turns out they are more common than once thought.

If anybody ever does the experiments I suggested earlier, here are a couple of things I would like to know.

First, I would like for somebody to pick some length to test, using simulations, AI, or whatever, and get an estimate of what fraction of sequences of that length can form functional proteins.

Second, I would like for somebody to do the same thing with a handful of other lengths. They could maybe test lengths of 20 amino acids, 50, 100, 150, 200, etc. I would be curious to know if the fraction is the same for each length or if it's different. If it's different, I would like to know whether the fraction increases or decreases with length. Maybe you could plot it on a graph. It would be interesting to know if there's a curve to it. Maybe somebody could come up with an equation to describe the curve and discover a new law of biology or something.

That's where I'm at right now. I would love to hear your thoughts on this subject, so leave a comment.

Here's tomorrow's post on the same subject where I asked ChatGPT to pick my estimates.

Saturday, February 08, 2025

Can a compatibilist use Plantinga's free will defense?

For those not familiar, Alvin Plantinga's free will defense can be found in his book, God, Freedom, and Evil. The free will defense differs from the free will theodicy in that whereas the theodicy is an attempt to say what God's reason actually is for allowing evil, the free will defense merely offers free will as a possible scenario under which God had a good reason for creating a world containing evil.

Compatibilists are people who think free will and determinism are compatible. Since libertarian free will is an indeterministic model of free will, compatibilists obviously aren't claiming that libertarian free will is compatible with determinism. They have a different understanding of free will.

According to compatibilists, our choices are determined by our psychological states (e.g. our beliefs, desires, plans, motives, biases, preferences, etc.). Our actions are free so long as we are not being forced, through coercion, physical causation, or brute force, to act contrary to our desires, motives, etc. Compatibilists are determinists. They're just not hard determinists since they aren't claiming that our choices are determined by the laws of nature plus initial conditions in a blind mechanistic way. We do things for reasons rather than because of physical causes.

Plantinga's argument works by implementing what, in logic, is called "Giving a model of S." S is a set of sentences, statements, propositions, or whatever. Giving a model of S is an attempt to show that the set is internally consistent. To do that, you come up with another sentence or set of sentences that, if true, would render all the members of S true. This shows that all the members of S are logically consistent.

The sentence or set of sentences describing the model need not actually be true. They are just a model - a hypothetical scenario - that if true would render all the members of S true.

In the case of the free will defense, S is a set of statements that include (1) Evil exists, and (2) A God exists that is all-knowing, all-powerful, and wholly good. Obviously, if the first statement were the explicit negation of the second statement, they could not both be true because there would be an explicit contradiction. So we just want to know if there's an implicit contradiction between the two statements or if they are consistent with each other.

To do that, we want to create a model of S, i.e. a scenario that, if true, would entail the truth of both sentences in S. Plantinga suggests the proposition that "God created a world containing evil and has a good reason for doing so." Never mind whether the statement is true or not. The important thing is that if it were true, then it would entail the truth of the two sentences in S. That means the sentences in S are logically consistent.

But before we can use Plantinga's model, we first have to know whether the model itself is even possible. Could it be that God created a world containing evil and had a good reason for doing so? If that's not even possible, then it can't serve as a model of S.

To answer the question, Plantinga comes up with a hypothetical scenario in which God does have a good reason for creating a world containing evil. The hypothetical scenario is basically libertarian free will combined with Molinism.

According to Molinism, there are certain counterfactuals of human freedom that limit the possible worlds God can actualize. For example, consider this counterfactual:

If Jim meets Bob, Jim will shake Bob's hand

Now, consider two possible worlds in which Jim and Bob meet.

World 1: Jim and Bob meet, and Jim freely chooses to shake Bob's hand.

World 2: Jim and Bob meet, and Jim freely chooses not to shake Bob's hand.

According to Molinism, the counter-factuals of human freedom are truths about what people would or wouldn't do in given situations, and these truths are logically prior to these people even existing. God's omniscience includes his knowledge of these counter-factuals, so God takes them into account when he decides which possible world to make actual.

If the counter-factual about Jim is true, then any world God actualizes in which Jim and Bob meet will be a world in which Jim freely chooses to shake Bob's hand. That means that even though World 2 is a possible world, it is not a world God is able to actualize.

This is not a blow against God's omnipotence because omnipotence does not include the ability to engage in logical absurdity. While World 2 is logically possible, it is not logically possible for God to actualize World 2 because World 2 is inconsistent with the counterfactual of Jim's freedom. Molinists call such worlds "infeasible." A feasible world is a possible world that God could actualize because it's consistent with the counter-factuals of human freedom. An infeasible world is a possible world that God cannot actualize because it's inconsistent with the counter-factuals of human freedom.

So far, we've talked about a situation in which the counter-factual of Jim's freedom entails that there is a possible world that is not feasible for God to actualize. Suppose, now, that we consider a morally significant choice, like whether to be kind to somebody, whether to steal, etc. It could be that there are counter-factuals such that if God actualizes certain worlds, sin will happen. Suppose, though, that all the counter-factuals of Jim's freedom entail that no matter what world God actualizes in which Jim exists, Jim will sin in that world. Plantinga calls Jim's condition "transworld depravity." That means given the counter-factuals of Jim's freedom, there is no possible world that is feasible for God to actualize in which Jim does not sin. Plantinga further suggests the possibility that transworld depravity is something that everybody suffers from.

If that were the case, then it would be impossible for God to actualize any possible world that contains free creatures but does not contain moral evil. There may be all sorts of possible worlds containing free creatures that never do anything wrong, but if everybody suffers from transworld depravity, then none of those possible worlds are feasible for God to actualize.

But, you might say, what about natural evil? What about suffering that is not the result of human free will decisions? Easy, says Plantinga. Natural evil could be the result of the free will decisions of evil spirits. Remember, Plantinga is not claiming that any of this is true. He's just giving a model of S, i.e. a scenario that, if true, would entail the truth of the two sentences in S, namely (1) that evil exists, and (2) that God is all knowing, all powerful, and wholly good.

Remember, the model of S is that God created a world containing evil and had a good reason to do so. The good reason is that there are no feasible worlds containing free creatures that do not sin.

But, you might say, God didn't have to create a world containing free creatures. So even if we grant that there are no feasible worlds containing free creatures that do not contain evil, there still might be worlds without free creatures that do not contain evil. The question, then, would be whether those worlds are better or not.

Those who subscribe to libertarian free will cite multiple reasons for why a world containing free creatures is better than a world without free creatures, even if it means there will be moral evil. Here are a few of them:

1. Libertarian free will is necessary for moral good or evil, so if there were no free creatures, you might be able to eliminate moral evil, but at the same time, you'd be eliminating moral good. As long as the moral good that results from libertarian freedom outweighs the moral bad, a world containing free creatures is better than a world without, even if a world containing free creatures also contains evil and a world without free creatures doesn't.

2. Libertarian freedom is necessary for life to have any meaning. If we are mere puppets on strings and don't make any choices, there's no reason for us to even be sentient. We might as well be philosophical zombies.

3. Libertarian freedom is necessary for love. Love isn't genuine if it's pre-programmed, hard-wired, or causally determined. It's only genuine if people freely love each other.

4. Libertarian freedom is necessary for reasoning. If everything you believe is just the end result of a blind mechanistic series of physical cause and effect, then there's no sense in which those beliefs could be the result of affirming a truth for good reasons. Reasons are irrelevant because, given a set of initial conditions, plus the laws of nature, your current beliefs would be determined to emerge whether there were good reasons for them or not. If you happen to deny free will, that's just because you are being caused by how the chemistry in your brain happens to be fizzing at the moment to deny free will, and the fizzing of your brain is just part of a long causal chain that stretches indefinitely into the past and into the future.

Now we come to the question of whether a compatibilist can use Plantinga's free will defense. After all, compatibilists do not subscribe to libertarian free will. It seems to me there are two things for a compatibilist to consider: (1) Is libertarian freedom even possible, and (2) Is a world with libertarian freedom better than a world without?

Since Plantinga's free will defense isn't offering the free will scenario as the actual answer to why there is evil in the world, and is only offering it as a possibility that, if true, would render S logically consistent, it would appear that a compatibilist need not affirm libertarian free will in order to use Plantinga's argument. Even though a compatibilist may think the free will scenario is false, as long as they grant it as a possible state of affairs, they should be able to use it. If they use it, they are only offering a Model of S to show that the members of S are logically consistent.

Some compatibilists think libertarian free will is at least possible. It's something God could bring about if he wanted to. But there are some compatibilists who think libertarian free will is incoherent. It does not describe a possible state of affairs. For those compatibilists, it would be inconsistent of them to use Plantinga's free will defense. Unless libertarian free will is possible, it cannot serve as a Model of S.

For those compatibilists who think libertarian free will is at least coherent, they have to consider the additional question of whether the world would be better with or without libertarian freedom. If we look at the four reasons above for why a libertarian sees the good in libertarian free will, a compatibilist will probably disagree with all four points. Compatibilists deny libertarian free will but still affirm the reality of good and evil, that life has meaning, that people genuinely love each other, and that we are reasoning creatures capable of having justified beliefs.

Of course it's possible libertarian freedom serves some other good purpose, and a compatibilist could be open to that. It seems to me, though, that a compatibilist should be very skeptical that there is such a purpose (or at least a sufficient one) since, on their view, the reality of the matter is that God chose to actualize a world without libertarian freedom. The fact that God actualized this world rather than one containing libertarian free will seems to suggest that whatever goods might accompany libertarian free will, they weren't good enough to justify God actualizing a world with libertarian freedom.

One route a compatibilist might take is to grant the epistemological possibility that they are just wrong in all their compatibilist beliefs. Maybe they're wrong to reject the four reasons for why libertarian freedom serves a good purpose. Maybe they're wrong to think libertarianism is incoherent. Maybe they're wrong to think a world without libertarian freedom is better than a world with it. I'm not sure this kind of epistemological possibility is sufficient to justify using Plantinga's free will defense, though. If libertarian free will is incoherent, but the compatibilist just doesn't know it, then libertarian freedom still can't serve as a Model of S. It is not consistent to think libertarian freedom is incoherent while, at the same time, offering it as a Model of S. I'm curious if anybody reading this disagrees with me about that.

A compatibilist may reject libertarian free will and still use Alvin Plantinga's argument. Remember, Plantinga's Model of S consisted of the statement, "God created a world containing evil and had a good reason for doing so." Libertarian free will was only offered as a hypothetical example to show that such a thing is possible. It's possible, because of libertarian freedom, that God could have a good reason for creating a world containing evil. Even though a compatibilist might reject the idea that libertarian freedom serves as a good reason for God creating a world containing evil, they could come up with some other scenario that does the same thing. And, again, the scenario need not be true. It need only be possible.

Here is one possibility.

God himself is the greatest possible good. Everything about him is good. For any good attribute God has, a world in which that attribute gets expressed is better than a world in which it doesn't get expressed. An active good is better than a dormant good. A world where all of God's attributes get expressed is better than a world in which some of them, though good, never get expressed.

Now, consider some of the attributes God has, like mercy, a willingness to forgive, and wrath against sin. None of these attributes can be expressed if there is nothing to forgive, no occasion to show mercy, and no sin to punish. People may have a hard time wrapping their minds around the idea that punishing sin is a good thing or that it wouldn't be better if there were no sin to punish. But if you accept that everything about God is good, and that God does have wrath against sin, then you have to accept that it is good for God to express wrath against sin.

Under this hypothetical scenario, God's goodness actually entails that evil exist. Evil is necessary for God to give full expression to all his attributes. Expressing his attributes is how God glorifies himself. We glorify God by giving rise to the expression of those attributes. God is glorified in the expression of his mercy towards some, and he is also glorified in the expression of his wrath toward others. If this scenario is possible, then it's possible that God created a world containing evil and has a good reason for doing so. That, then, can be a Model of S showing that the existence of evil is compatible with the existence of an all-knowing, all powerful, and wholly good God.