Friday, February 14, 2025

Protein evolution probability, take three

Wow, this is my third post in a week on this one topic. You'd think I found it interesting or something!

I've been reading around to try to find out how controversial or accepted Douglas Axe's 1 in \(10^{77}\) functional protein estimate is, and it turns out it's very controversial. Other people have made estimates in which the ratio of functional to non-functional proteins is a lot higher than what Douglas Axe estimated. This paper, for example, estimates that 1 in \(10^{11}\) proteins are functional. It says,

In conclusion, we suggest that functional proteins are sufficiently common in protein sequence space (roughly 1 in \(10^{11}\)) that they may be discovered by entirely stochastic means, such as presumably operated when proteins were first used by living organisms. However, this frequency is still low enough to emphasize the magnitude of the problem faced by those attempting de novo protein design.

Since this estimate is many orders of magnitude greater than what Douglas Axe estimated, I want to do a rough back-of-the-napkin estimate of the probability of getting a functional protein just in the Milky Way Galaxy within 1 billion years, using much stingier probabilistic resources than I used in my last couple of posts on this subject (here and here).

I'll assume there are 100 billion stars in the galaxy, 7% are G-type stars, only G-type stars are working on the problem, and only 20% of them have planets in the habitable zones. That's \(1.4 \times 10^9\) planets working on the problem.

I'll assume the same proportion of carbon, oxygen, hydrogen, and nitrogen in the lithosphere of each planet, but only a small fraction is available to try to make proteins. Instead of taking the elements out of the entire lithosphere, I'll take them out of a volume about the size of Crater Lake.

I asked two different AIs to estimate the mass of the water in Crater Lake. One said about \(10^{13}\) kg, and the other said about \(10^{12}\) kg, so let's go with \(10^{12}\) kg. I'll spare you all the details I didn't spare you last time and just tell you I calculated that there would be \(2.5 \times 10^{36}\) carbon atoms, which allows you to make \(1.67 \times 10^{11}\) proteins with 300 amino acids each.

With \(1.4 \times 10^9\) planets making \(1.67 \times 10^{11}\) proteins per second for 1 billion years (i.e. \(3.1536 \times 10^{16}\) seconds), that comes out to a total of \(7.37 \times 10^{36}\) tries in all. Let's simplify that to \(10^{36}\) and plug it into our equation to get the probability of finding a functional de novo protein.

\[ \normalsize 1 - \left(1 - \frac{1}{10^{11}}\right)^{10^{36}} \]
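If you want to check that on a computer rather than a calculator, here's a minimal Python sketch (my own addition). Computing \(1 - (1 - p)^n\) directly underflows when \(p\) is tiny, so it uses log1p and expm1 instead:

```python
import math

# 1 - (1 - 10^-11)^(10^36), computed stably with log1p/expm1
print(-math.expm1(1e36 * math.log1p(-1e-11)))  # prints 1.0
```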

There you have it. It looks like you'd be guaranteed to find a functional protein. Again, I have no idea if the estimate of the fraction of functional to non-functional proteins is correct, so I still don't know if these calculations are worth anything. But based on these estimates, it looks like it's very likely you could get de novo proteins, even with stingy probabilistic resources, somewhere in the galaxy.

Unless I hear of some solid uncontroversial estimates of the ratio of functional to non-functional proteins of average length, I think I'm probably going to say the argument against evolution from the improbability of de novo protein evolution is not a good argument. It relies too heavily on controversial estimates. It may turn out to be valid if more information comes in, but we'll just have to wait and see. It could also be made valid by taking into consideration more of the details about how proteins are made and how cells work. More knowledge about exo-planets and the chemistry in the early earth may also contribute.

Some final thoughts

I emailed Mr. Pruett, who I mentioned in the first post, to solicit his feedback on that first post. He knows a lot more about this topic than I do. Based on what he said, there are a lot more complications in coming up with probabilities than are reflected in my thought experiment. For example, I ignored how genes actually work, including all the machinery needed to build proteins. I ignored the fact that genes can be altered somewhat without altering the resulting protein. There's also the issue of some proteins requiring other proteins in order to fold up correctly. They don't all just fold themselves. A realistic thought experiment, I'm afraid, would be really complicated.

My strategy has been similar to what we used to do in my calculus classes in college. I remember in one of those classes, we had to figure out whether a series was convergent or divergent. If the series is too complicated to test directly, you can compare it to a simpler one that you know is either more or less likely than the original to converge. If you're testing for convergence, and you know your simplification is less likely to converge than the original series, but it converges anyway, then you know your original series converges.
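That's basically the comparison test. In symbols:

\[ \normalsize 0 \le a_n \le b_n \ \text{for all } n, \quad \sum_{n=1}^{\infty} b_n \ \text{converges} \ \Rightarrow \ \sum_{n=1}^{\infty} a_n \ \text{converges} \]

For example, \( \sum 1/(n^2 + 3n) \) converges because each term is no bigger than the corresponding term of \( \sum 1/n^2 \), which is known to converge. In the same way, if the probability of getting a functional protein comes out near 1 even with assumptions stacked against it, the more realistic probability should be at least as high.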

Mr. Pruett also pointed out that I over-complicated part of my calculation. I could've just started with \(10^{80}\) atoms in the universe and figured out how many of them are carbon atoms, and gone from there. I didn't have to talk about star types, habitable planets, lithospheres, etc.

Mr. Pruett made a good point I wish I had considered. I gave very generous time constraints on building proteins, but if I wanted to test de novo genes in already existing species, those appear to pop up pretty quickly in nature. The Cambrian Explosion only lasted maybe 30 million years, and lots of new genes (and their corresponding proteins) had to have come into existence during that short window of time. That's three orders of magnitude less time than my original 13.8 billion year estimate and two orders of magnitude less than my more restricted estimates of 1 to 5 billion years.

Mr. Pruett made an interesting psychological point. Suppose we calculated that it's nearly impossible for the universe to cough up certain functional proteins, but we went out in nature and discovered that they exist. It's unlikely that a biologist would say, "Wow, that's a miracle." It's more likely they would say, "I guess nature is more clever than we thought." When it comes to trying to figure out whether nature could do something on its own or whether it needs divine assistance, our worldview presuppositions are probably going to carry more weight than our calculations.

I'm not necessarily saying that it shouldn't. After all, a person might have good reason for subscribing to their worldview. If I make some calculation that allows me to make a prediction about what I should expect to find in nature, and I go out in nature and find that things are very different, I probably should doubt the assumptions that went into my calculation. I mean, that's how science works. You come up with a hypothesis, you make a prediction based on your hypothesis, and you test it by making observations to see if the prediction pans out.

I think what the protein evolution probability argument attempts to do is not test the assumptions that go into the calculation, but to test the worldview of naturalism. If you assume naturalism as part of your hypothesis, and you use various assumptions to make a calculation that predicts something about proteins, and you go out in nature and find out that your prediction was wrong, that is supposed to cast doubt, not on the assumptions that went into your calculation, but on the assumption of your worldview. Somebody who subscribes to naturalism who runs the same experiment and falsifies their prediction is going to question the assumptions that went into their calculation rather than their naturalistic worldview. And maybe they should. I don't know. I guess at that point it depends on whether you're more sure about your worldview or more sure about the assumptions that went into your calculations, not to mention your confidence in entering them in your calculator correctly.

Anyway, thank you for joining me on this journey. It's been interesting for me.

Thursday, February 13, 2025

Alvin Plantinga and Sean Carroll are on the same page

I recently read an article by Sean Carroll called "Why Boltzmann Brains Are Bad." What jumped out at me when I read this article was how similar it was to Alvin Plantinga's Evolutionary Argument Against Naturalism (EAAN).

Boltzmann Brains are not in a position to know true from false because all the information that comes their way just fluctuated into being without having any connection with reality. This could happen because the information fluctuated inside their brains, or it could happen because the world in their immediate vicinity fluctuated into existence. Either way, they cannot use their perceptions or any of their tools of reasoning to reliably come to true beliefs about the world.

If you have a model of the universe that predicts you are a Boltzmann Brain, then that model undermines any justification you would have for believing that model. The model is self-stultifying because as soon as you believe it, for whatever reason, you lose your justification for believing it.

Carroll thinks this is a good reason to reject models that generate Boltzmann Brains. Since Boltzmann Brains are "cognitively unstable," we shouldn't even consider models that generate them. They could still be true, of course. It's just that we could never be justified in believing them since they undermine the reliability of the very process we used to come up with them.

This argument is just like Alvin Plantinga's EAAN. According to Plantinga, if both evolution and naturalism are true, then it's unlikely our brains would be able to reliably distinguish between true and false. Evolution combined with naturalism generates unreliable belief-producing cognitive faculties. So if we believe in both evolution and naturalism, then we have an undercutting defeater for all of our beliefs, including our belief in evolution and naturalism.

In both cases, they are considering models of the world that generate unreliable belief-producing brains, and they are both saying that even though it's possible for such models to be true, we can never be justified in believing them. We shouldn't even consider a model of the world that makes it likely that we can't tell true from false because if we can't tell true from false, then we can't know whether the model is true or false.

Neither of them claims to have proved these models to be false. They only claim to have shown the models are not reasonable to believe or even consider.

Wednesday, February 12, 2025

Fraser Cain against the fine-tuning argument

Fraser Cain, one of my favourite science news commentators on YouTube, recently made a video where he explained why he doesn't think the fine-tuning argument is a good argument (beginning at the 4:13 point in the video). He gave a few of the standard responses, and I didn't think any of them were good responses, so I'm going to respond to them.

Most of the universe is uninhabitable

First, he said the universe is only barely habitable. The vast majority of the universe is uninhabitable. You have all the vast emptiness of space. Then you have stars that can't support life. Then most planets are also lifeless. And only the thin surface of some planets (like earth) is habitable.

This is not a good argument against the fine-tuning argument, for a few reasons. One reason is that it doesn't dispute the fact that if you changed any of the laws or constants by a hair, life wouldn't be possible at all. As I explained in another post, the universe could be fine-tuned for the possibility of life even if there happened not to be any life at all. The existence of just one life form proves that the universe is habitable. If the constants of nature have to be fine-tuned before that could be possible, then the universe is fine-tuned for life even if life is extremely rare.

A second reason, which I also mentioned in that post, is that even given ideal conditions, the actual emergence of life might be an extremely improbable event. I discussed that in two recent posts where I tried to calculate the probability of getting a functional protein given the vast probabilistic resources in the universe. My estimates and assumptions were rough, but based on them, it looks like we should expect the actual emergence of life to be rare. But the fact that it's even possible means the universe is fine-tuned.

A third problem with this argument is that empty space is necessary for habitability. Imagine if the entire universe were filled with a life-friendly atmosphere like here on earth. If that were the case, there would be two major problems. One problem is that there would be too much mass, causing the universe to collapse, ending any chance of life. The second problem is that there couldn't be any stable orbits. You need empty space so there isn't friction when planets orbit stars and stars orbit galaxies.

The universe would have to be habitable for us to be observing it.

The second point he makes is the anthropic principle. The universe would have to be habitable for us to be here observing it.

This is not a good response to fine-tuning either. If a thousand people aimed their rifles at me and fired, but they all missed, nobody would say, "There's nothing remarkable about the fact that you're alive since you'd have to be alive to consider whether there's anything remarkable going on." Of course it would be remarkable if I were alive! Me being alive would require an explanation because of how unlikely it would be for me to survive that many people shooting at me.

The anthropic principle is a version of the observer-selection effect. The observer-selection effect would explain why we find ourselves in a habitable universe rather than an uninhabitable one if we assumed both kinds existed (e.g. if we assumed a multiverse with random combinations of laws and constants). If there is a multiverse, and the vast majority of universes were uninhabitable, the anthropic principle would explain why we find ourselves in one that's habitable. It's because a habitable universe is the only kind of universe that can be observed. All observers observe habitable universes.

The anthropic principle only works as an explanation of fine-tuning if you combine it with a multiverse. But Fraser doesn't even suggest a multiverse. If there are far more ways the universe could've been uninhabitable than ways it could've been habitable, and there's just one universe, then the probability is that the one universe would be uninhabitable. The fact that we're alive at all shows that the universe is habitable. That requires an explanation, just as being alive in the firing squad analogy requires an explanation. Why has the most unlikely thing happened? It won't do to dismiss the question on the basis that if it hadn't happened, we wouldn't be around to wonder about it.

I said more about this argument, including the puddle analogy that's often invoked, here.

Any universe is improbable.

A third thing he said was that if you threw a dart out of an airplane, no matter where the dart lands, it's improbable that it would've landed at that particular spot.

That's an argument I used to make as a college freshman against teleological arguments in general, but it's a terrible argument. As somebody who has a basic understanding of entropy and the second law of thermodynamics, Fraser ought to know better. While any random arrangement of parts in a closed system is equally improbable, there are certain kinds of arrangements that are less probable than other kinds. Some are ordered kinds, and some are random kinds. To use an analogy, imagine dumping a box of alphabet cereal on the floor. Any particular arrangement is equally improbable, but there are certain kinds of arrangements (namely, the kind that spell words and sentences) that are far less probable than other kinds (namely, the kinds that don't spell words or sentences). In the same way, any random combination of values for the constants of nature might be equally improbable, but the combinations that result in habitable universes are extremely rare. That's the real issue.

We just don't know why the universe is habitable.

The fourth thing he said was that the fine-tuning argument shuts down scientific inquiry. If we don't know why the universe appears to be fine-tuned for life, we should say, "I don't know," and try to find out instead of suggesting God did it.

The problem with this argument is that it begs the question against a theistic explanation. It just assumes theism is the wrong answer. This is a response one could use against any hypothesis.

Consider the big bang as an explanation for the CMBR and the red shift of distant galaxies. One could just as easily invoke Fraser's argument and say, "I don't know why there's a CMBR or why there's a red shift to distant galaxies" instead of suggesting a big bang did it. You could run the same argument against cosmic inflation.

If Fraser doesn't think God is a good explanation, he needs to say specifically why. Is there a better explanation? Is God an insufficient explanation? Is there some reason to think God doesn't exist? Any of these could be a good response, but that's not where Fraser goes.

Imagine applying the same reasoning to an alleged crime scene. You look around and see what appears to be evidence that a murder took place, but your supervisor says, "Hey, if you don't know how the person died, then don't just assume a murderer did it. You should suspend judgment until you find out what did cause the death." Well, if everything at the crime scene points to a murderer, then that's what you should think is the explanation.

If you have good reason to think you've identified the correct explanation for some observation, then you're perfectly within your rights in concluding that your explanation is correct. There's no reason to say, "I don't know," and wait for the alleged right explanation as if you don't already have the right explanation.

Coming to a conclusion about the correct explanation for your observations doesn't mean you shut down inquiry. You can hold your belief provisionally and be open to changing your view if new information comes along. But you don't need to suspend judgment when you have evidence that points to a particular explanation.

Notice that nobody ever says the same sort of thing about any other explanation besides God. When they came up with dark matter as an explanation for flat galaxy rotation curves, nobody said, "Don't use dark matter to explain galaxy rotation because that shuts down scientific inquiry. Instead, hold out for a better explanation." Dark matter didn't put an end to inquiry. People still proposed other explanations, like Modified Newtonian Dynamics (MOND). Whether you think the correct explanation for flat galaxy rotation curves is MOND or dark matter, you are free to be open to new information that might point to a different explanation. Having an explanation doesn't shut down further inquiry.

Science is provisional. So is every field of inquiry. We make our best conclusions based on the evidence that's available to us. We don't withhold judgment about every single conclusion we come to merely on the basis that it's possible some new piece of information will come along in the future that overturns what we previously thought we knew. We don't stop investigating the world or testing what we think we know just because we think we already have the right answers. So there's no reason in the world to think that belief in God as the explanation for fine-tuning will put a stop to scientific inquiry.

Tuesday, February 11, 2025

Functional protein probabilities using ChatGPT's estimates

Yesterday, I made a post talking about the probability of one functional protein 200 amino acids long being formed through undirected processes somewhere in the observable universe. I had to make a lot of guesses, but to give our protein its best chance, I made very generous estimates. Based on my estimates, I calculated a near 100% probability of the universe spitting out at least one functional protein 200 amino acids long.

Today, I thought I'd see what ChatGPT would say. I'll use the same probability equation, and the same line of reasoning, but I'll let ChatGPT come up with my estimates for me. Whenever ChatGPT gives a range, I'll use the upper end of the range (with one exception). Here's what ChatGPT said:

Stars in the universe: \(1 \times 10^{21}\)

Fraction of stars that could host planets in the habitable zone: 25%

How much carbon, nitrogen, hydrogen, and oxygen are on an average planet like earth?:

For rocky planets like earth. . .

H: 2%
C: 0.5%
N: 0.3%
O: 50%

ChatGPT didn't say, but I'm going to assume those percentages are by mass. It looks like ChatGPT is just considering earth's crust, too, which is good. That's what I want.

I wanted to know which of these would be the limiting factor, so I asked ChatGPT how many of each atom we would have if we took one of each of the 20 usual amino acids and added up all the hydrogen, carbon, nitrogen, and oxygen in them. A couple of them have sulfur, but I'm going to ignore that for simplicity. ChatGPT said,

C: 101
H: 161
N: 29
O: 49

It looks like either carbon or hydrogen is going to be the limiting factor. Let's go with carbon.
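As a quick sanity check (my own sketch, using rough atomic masses), you can divide each element's crustal mass fraction by its atomic mass and by the number of atoms a full set of amino acids needs. The smallest ratio marks the bottleneck:

```python
# Rough limiting-element check using ChatGPT's crust fractions above.
crust_fraction = {"H": 0.02, "C": 0.005, "N": 0.003, "O": 0.50}
atomic_mass = {"H": 1.0, "C": 12.0, "N": 14.0, "O": 16.0}
atoms_needed = {"H": 161, "C": 101, "N": 29, "O": 49}  # per full set of 20 amino acids

for element in crust_fraction:
    ratio = crust_fraction[element] / atomic_mass[element] / atoms_needed[element]
    print(element, f"{ratio:.2e}")
# C comes out smallest (~4.1e-06), so carbon is the limiting element.
```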

What is the average length of a protein?

ChatGPT said 300 to 400. This time, I'm going to go with that lower limit of 300.

What is the average lifespan of a star?

ChatGPT gave three estimates--one for red dwarfs, one for high-mass stars, and one for sun-like stars. The red dwarfs live a really long time, but their planets are mostly uninhabitable because of how active the stars are, and massive stars don't live very long at all, so I'm just going to go with sun-like stars. The average there is 10 billion years.

It seems unreasonable to use the entirety of earth's mass in my calculation because proteins aren't going to form in the mantle or in earth's core. So I asked ChatGPT how much of earth's mass makes up the lithosphere. ChatGPT said 1 to 2%, so I'm going to go with 2%. Earth's mass is \(5.7 \times 10^{24}\) kg, so the lithosphere must be \(1.14 \times 10^{23}\) kg.

Let's do some calculations.

First, I'm still going to assume 1 try per second.

I'm going to assume all the amino acids are in one big soup.

The mass of the earth's lithosphere is \(1.14 \times 10^{23}\) kg. 0.5% of that is carbon, so there's \(5.7 \times 10^{20}\) kg of carbon in the lithosphere. An average carbon atom weighs \(1.99 \times 10^{-26}\) kg, so there are about \(2.86 \times 10^{46}\) carbon atoms in the lithosphere.

You need 101 carbon atoms for a full set of the 20 standard amino acids, so with those carbon atoms, you can create \(2.83 \times 10^{44}\) full sets. Each set has 20 amino acids, so that's \(1.42 \times 10^{43}\) individual amino acids per planet.

An average protein has 300 amino acids, so that's \(4.73 \times 10^{40}\) proteins per planet. That's going to be the number of tries per second per planet.

There are \(1 \times 10^{21}\) stars, and 25% of them have planets in the habitable zone, so that's \(2.5 \times 10^{20}\) planets in the habitable zone.

Although earth has 10 billion years, only 5 billion of that will have life on it. Proteins need to form in a shorter span than 5 billion years if there are to be multiple species and diversity, so I'm going to give each planet 2 billion years to create an average protein. That's \(6.31 \times 10^{16}\) seconds.

Now, I think we can calculate the number of tries.

\[ \normalsize 1 \frac{\text{try}}{\text{sec}} \cdot 6.31 \times 10^{16} \, \text{sec} \cdot 4.73 \times 10^{40} \, \frac{\text{proteins}}{\text{planet}} \cdot 2.5 \times 10^{20} \, \text{planets} = 7.46 \times 10^{77} \, \text{protein tries} \]

This is getting interesting.
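Here's the same arithmetic in a few lines of Python (my own sketch; the numbers are just the estimates above):

```python
tries_per_second = 1.0
seconds = 6.31e16            # ~2 billion years
proteins_per_planet = 4.73e40
planets = 2.5e20

total_tries = tries_per_second * seconds * proteins_per_planet * planets
print(f"{total_tries:.3g}")  # ~7.46e+77
```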

Now, we can plug that into our equation using the Douglas Axe estimate of 1 functional protein for every \(10^{77}\) proteins of a given length. He used 150 amino acids, but I'm assuming the fraction is the same for all lengths.

\[ \normalsize 1 - \left(1 - \frac{1}{10^{77}}\right)^{7.46 \times 10^{77}} \]

The exponent is pretty close to the denominator, so we could get a real probability here. Since I can't put those huge exponents in my calculator, I played around. I tried replacing the \(10^{77}\) in both places with 2, 10, 100, 1000, and 1,000,000. I got pretty close to the same result each time, so I'll bet that's what it is. The probability came out to be 99.9%, which means you'd be practically guaranteed to get a functional protein.
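There's a reason that substitution trick works. For large \(n\),

\[ \normalsize \left(1 - \frac{1}{n}\right)^{kn} \approx e^{-k}, \quad \text{so} \quad 1 - \left(1 - \frac{1}{10^{77}}\right)^{7.46 \times 10^{77}} \approx 1 - e^{-7.46} \approx 0.99942 \]

The answer only depends on the ratio of the exponent to the denominator, which is why swapping \(10^{77}\) for small numbers gives nearly the same result.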

It is possible that I made a math error. I've gone through and corrected myself two or three times since posting this, so there's a possibility I could go through it again and find another mistake.

A lot of these numbers are speculative. I guess you can get whatever probability you want depending on how you massage the numbers. You can be generous or stingy with your assumptions. As I said in the last post, I think the pivotal unknown is the fraction of proteins of a given length that could be functional out of all the possible sequences of amino acids of that length. I suggested in the last post how we might be able to figure that out with the new AlphaFold AI thingy. Since nobody has done it, as far as I know, I used Douglas Axe's estimate, which, as I explained in the last post, I'm not so sure about.

One thing I learned in this whole thing is that if you're just looking for any functional protein, the length of the protein doesn't figure into the probability (except when you're determining how many proteins you're going to get per planet with your available amino acids). All that matters is what fraction of proteins of any given length will be functional. That fraction may, for all we know, be the same regardless of length. But like I said in the last post, we don't necessarily know that the fraction is the same in all lengths. The only way to figure that out is through experimentation or simulation. Assuming it's the same for all lengths, the length only figures into the probability if you're looking for one particular sequence of that length. Then the length matters a great deal to the probability.

You could make the length relevant if you considered the probability of different lengths with any sequence. It does seem like the longer a sequence is, the less probable it is. On the other hand, that may have a lot to do with how it is formed. If you had two proteins 200 units long, and they merged in one event, you'd have one 400 units long. That would be easier than if you had one 200 units long and it mutated through successive generations until it grew to 400 units. It's probably simpler to leave this probability out.

One interesting thing I took from this is that if you ignore the \(1 \times 10^{21}\) stars in the universe and all the planets surrounding them, and you focus only on earth, the probability of getting any functional protein on earth would be almost non-existent. But if you include the whole observable universe, then you're guaranteed to get the functional protein somewhere in the universe. So there's a sense in which we really did win the lottery here on earth.

That's assuming, of course, that there's some validity to my thought experiment. It is, admittedly, speculative. It uses a lot of really rough estimates and simplifications. If there is some validity to it, it would answer the Fermi paradox. Life in the universe is extremely rare. Advanced intelligent life like ours even more so.

Wait! There's more! I wrote a third post on this topic after looking further into estimates for functional to non-functional amino acid sequences and after getting some feedback from Paul Scott Pruett.

Monday, February 10, 2025

Evolution's mathematical obstacle

There are a few equations I put in this post using a new trick I learned recently. Sometimes, the equations look really tiny. If that happens to you, just hit the refresh button, and they should be big again. The issue may just be my browser.

One of the most interesting things I've read or heard about concerning the mathematical obstacles to evolution is an argument that says the probability of getting just one functional protein of average length in the entire history of the universe, even given unrealistically generous probabilistic resources, is so vanishingly small that it's not reasonable to believe that new proteins could form through undirected natural processes.

One particularly good presentation of an argument like this is "The Statistical Case Against Evolution" by Paul Scott Pruett. He notes that there are examples of convergent evolution, not just on the macro scale, but on the scale of genes and proteins as well. That means nature seems to aim at particular target proteins. The odds of getting particular target proteins are much smaller than the odds of getting just any functional protein, yet nature seems to produce the same target proteins over and over.

I also saw this video on YouTube, but based on what Scott told me, it's a little sketchy. Scott thought some of his assumptions were either too generous or just arbitrary. I have issues with this presentation, too, but it's at least easy to understand.

I have been skeptical of this argument for a number of reasons, but I thought it might be interesting for me to try to work through the line of reasoning myself and see what I come up with. While trying to work through it, I came up against some unanswered questions that prevented me from completing the argument. I've put this blog post on hold and revisited it from time to time over the last few years, but now I thought I'd just make a post about where I'm at. Maybe if I posted about my unanswered questions and why I think they are relevant, somebody will have something to say about them.

So, here we go.

An amino acid is an organic molecule made of hydrogen, carbon, oxygen, and nitrogen. There are 20 different kinds of amino acids that make up the proteins in all life on earth. Proteins are strings of amino acids. Depending on the length of these sequences and their order, proteins can be folded into stable shapes. The shapes of the proteins are what give them their function. You can think of them like car parts.

The amino acids can be strung together in any order. You can think of them like letters in the alphabet. You can string a bunch of letters together in any order. Just as some of those arrangements will produce gibberish while other arrangements will produce coherent words and sentences, so also some sequences of amino acids can be folded into stable functional proteins, and others cannot.

Proteins come in different lengths, but the average protein is about 200 amino acids long. Since each position along the string could contain any of 20 different amino acids, there are \(20^{200}\) possible sequences for a protein that's 200 amino acids long.
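To get a feel for how big that number is, you can convert it to a power of ten:

\[ \normalsize 20^{200} = 10^{200 \log_{10} 20} \approx 10^{260} \]

That's vastly more than the roughly \(10^{80}\) atoms in the observable universe.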

If you were to randomly pick out a sequence of amino acids 200 units long, the odds that you would get any one particular sequence would be 1 in \(20^{200}\), which is pretty small. However, there are more things to consider in this argument, which brings me to some of my unanswered questions.

Any low probability can be overcome if you have enough chances for it to happen. If you were trying to guess the combination of a lock, you may have a 1 in a million chance of making the right guess on the first try, but if you tried a million times, you'd have a good chance of guessing correctly in at least one of those tries.

Let's suppose you want to aim for a particular sequence of amino acids 200 units long. You start on day one at the beginning of the universe, and you do one try per second continuously until today. There's no need to be precise, so roughly. . .

13.8 billion years x 365 days/year x 24 hours/day x 60 minutes/hour x 60 seconds/minute = \(4.35 \times 10^{17}\) seconds

With that being the case, what are the chances of getting the correct sequence if you made one attempt every second for 13.8 billion years?

Let me make a detour here and clarify something I used to be confused about.

If you have a six-sided die, and you roll it, the odds of getting any given number would be 1 in 6, right? So you'd think that if you rolled it six times, the odds of getting any given number would be 1. In other words, you'd be guaranteed to get the right number. But that obviously isn't right because it's possible to roll it six times and never get a 2. So here's the correct way to figure out the probability of getting a 2 if you roll it six times.

Each time you roll the die, you have a 5 in 6 chance of not getting a 2. So if you roll it six times, the probability of not getting a 2 on all six rolls is given by,

\[ \normalsize \left(\frac{5}{6}\right)^6 \]

Since that's the probability of not getting a 2 in the six rolls, you can subtract that number from 1 to get the probability that you will get a 2.

\[ \normalsize 1 - \left(\frac{5}{6}\right)^6 \]

That comes out to a 66.5% chance, or about 2 in 3, which is obviously lower than a 100% chance.

Now, let's apply that same reasoning to figure out what the chances are of getting our target protein. Since the chance of getting the right sequence in one try is 1 in \(20^{200}\), the chance of not getting the right sequence is,

\[ \normalsize \frac{20^{200} - 1}{20^{200}} \]

And the chance of getting the right sequence in \(4.35 \times 10^{17}\) tries is,

\[ \normalsize 1 - \left(\frac{20^{200} - 1}{20^{200}}\right)^{4.35 \times 10^{17}} \]

Unfortunately, my dinky calculator can't handle those kinds of numbers, but if you think about it, the probability is really small. That means if you were to make one random attempt every second for 13.8 billion years to get a particular sequence of amino acids 200 units long, there's almost no chance that it would happen. But I would love to see the actual number.
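It turns out you can get that actual number with arbitrary-precision arithmetic. Here's a minimal Python sketch using the mpmath library:

```python
from mpmath import mp, mpf, expm1, log1p

mp.dps = 50  # plenty of working precision

p = mpf(20) ** -200    # chance of hitting the target sequence in one try
n = mpf("4.35e17")     # one try per second for 13.8 billion years

# 1 - (1 - p)^n, computed as -expm1(n * log1p(-p)) to avoid underflow
print(-expm1(n * log1p(-p)))  # ~2.7e-243
```

That's roughly \(2.7 \times 10^{-243}\), which is as close to zero as makes no difference.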

We can improve these odds, though. We know that in reality, there could be tries happening simultaneously all over the universe each second (relativity of simultaneity notwithstanding). Let's imagine some really generous probabilistic resources to improve our odds.

The internet estimates that there are \(10^{50}\) atoms in the earth. Let's suppose all these atoms are just hydrogen, carbon, oxygen, and nitrogen, that they are currently part of amino acid molecules, and that they are all joining in the effort to make our target protein. The average amino acid contains 10 atoms, so there would be about \(10^{49}\) amino acids that make up the earth.

\[ \normalsize \frac{10^{50} \, \text{atoms}}{10 \, \text{atoms/amino acid}} = 10^{49} \, \text{amino acids} \]

The internet also estimates that there are 200 billion trillion stars in the universe. That's 200,000,000,000 x 1,000,000,000,000 = \(2 \times 10^{23}\) stars. Let's imagine there are two earth-like planets for each star, and they have all existed for the entirety of the 13.8 billion years of the universe. That's \(4 \times 10^{23}\) earth-like planets, all trying to make this one protein.

In that case, there would be \(4 \times 10^{72}\) amino acids available to make proteins.

\[ \normalsize 10^{49} \, \text{amino acids/planet} \cdot 4 \times 10^{23} \, \text{planets}= 4 \times 10^{72} \, \text{amino acids} \]

Since each try uses up 200 amino acids, there are \(2 \times 10^{70}\) tries going on each second.

\[ \normalsize \frac{4 \times 10^{72} \, \text{amino acids}}{200 \, \text{amino acids/try}} = 2 \times 10^{70} \, \text{tries} \]

Since we already figured out that there are \(4.35 \times 10^{17}\) seconds in 13.8 billion years, that means there are \(8.7 \times 10^{87}\) tries in the history of the universe.

\[ \normalsize 2 \times 10^{70} \, \text{tries/sec} \cdot 4.35 \times 10^{17} \, \text{sec} = 8.7 \times 10^{87} \, \text{tries} \]

Now we can adjust the original probability we got to account for all these generous probabilistic resources. Now, we get,

\[ \normalsize 1 - \left(\frac{20^{200} - 1}{20^{200}}\right)^{8.7 \times 10^{87}} \]

That's an improvement, and although my dinky calculator can't give you the actual number, you should be able to tell that it's still an extremely small number. Look at that fraction. The numerator and denominator are almost exactly the same because if you subtract 1 from a number as big as \(20^{200}\), you haven't subtracted much, relatively speaking. That means the fraction is extremely close to 1, which means that number raised to the \(8.7 \times 10^{87}\) power, though smaller, is still going to be very close to 1. And that means 1 minus that number is going to be very close to zero. And that means there's nearly a zero percent chance of getting the target protein.
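You can even put a rough number on it. Whenever \(Np\) is much smaller than 1, \(1 - (1 - p)^N \approx Np\). Here, \(p = 1/20^{200}\) and \(N = 8.7 \times 10^{87}\), so

\[ \normalsize 1 - \left(\frac{20^{200} - 1}{20^{200}}\right)^{8.7 \times 10^{87}} \approx \frac{8.7 \times 10^{87}}{20^{200}} \approx 5 \times 10^{-173} \]

Still indistinguishable from zero.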

Up until now, we have only been trying to calculate the odds of getting one specific sequence of amino acids 200 units long. But if we are just trying to find out what the odds are of getting any functional protein given the same probabilistic resources, our odds should greatly improve. The reason is that for any sequence of amino acids of some given length, there is more than one sequence that could be functional.

There are two things necessary for a protein to be functional. First, it needs to be able to fold up into a particular shape and hold that shape. Second, it needs to exist in an environment in which it serves a purpose. I'm going to ignore that second requirement for the sake of this thought experiment because it would complicate things. Whether a protein serves a purpose depends on the shape of every other protein in its environment. For the purposes of this thought experiment, I'm going to assume that any protein that can fold up into a stable shape has the potential to be functional. I just want to know what the odds are of getting any potentially functional protein in the history of the universe given our generous probabilistic resources.

It is at this point in the game that I have run up against a wall. To continue the thought experiment, I need to know, out of all the \(20^{200}\) possible sequences of amino acids in my protein, what fraction of them are capable of folding up into a stable shape.

We know already that you can alter a few of the amino acids in a sequence and still end up with the same functional protein. If that weren't the case, we'd all be genetically identical. It is our genes that store the information to build our proteins. Two people can have the same gene that codes for the same protein, but there will be slight differences between them. Those differences are what make us genetically unique. It's why DNA evidence is useful in criminal investigations. It's also why 23andMe can find your relatives. The closer the relation, the more similar the DNA sequence.

Besides variations in the same protein, you can have completely different proteins (i.e. proteins that fold into a different shape and perform a different function) that are the same length, or close to the same length.

If all we looked at were the proteins that exist in nature, almost all of them are functional. Otherwise, nature wouldn't have preserved them. So we can't just look at the existing proteins to estimate how many sequences in \(20^{200}\) could be functional. I've seen people make that mistake.

It would be great if we could build that many and just see for ourselves what fraction of them fold into stable shapes. But \(20^{200}\) is too many, and they're not easy to make anyway. Another way is to use computer simulations. We could just have a computer predict how they would fold.

Predicting how a sequence of amino acids will fold up has been a notoriously difficult problem for a while now. Veritasium recently posted a video about it you should check out. Mithuna Yoganathan at the Looking Glass Universe channel also made a video about it a while back. The good news is that it looks like, thanks to AI, the notorious protein folding problem has been solved. AI can now predict, with 90% accuracy, how a given string of amino acids 30 units long will fold up. Until this breakthrough came along, I don't see how anybody could possibly know what fraction of proteins of a given length could fold up into stable shapes. Now, it looks like it's possible to figure it out.

How would they do it, though? One way would be to try every sequence. There's probably not enough computing power for that, though. It might work if you were only considering sequences 10 or 20 units long, but if you try 200 units long, no computer has that kind of power.

Another way is to try a representative sample size and extrapolate. Maybe they could try a million random sequences to see what fraction of them fold up into stable shapes. Then they could try another million and see if they get the same fraction. If they do, then they can extrapolate to the whole 20200 possibilities and estimate the fraction of them that can make functional proteins. Will somebody out there please try this? I would love to know.
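In code, that experiment might look something like this. This is just a sketch; is_stable is a hypothetical placeholder for whatever fold-prediction tool you'd actually plug in:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # one-letter codes for the 20 standard amino acids

def is_stable(sequence: str) -> bool:
    """Hypothetical stand-in for a real fold-stability predictor."""
    raise NotImplementedError  # plug in a structure-prediction model here

def estimate_stable_fraction(length: int, samples: int = 1_000_000) -> float:
    # Sample random sequences and count what fraction fold into stable shapes.
    stable = 0
    for _ in range(samples):
        seq = "".join(random.choices(AMINO_ACIDS, k=length))
        if is_stable(seq):
            stable += 1
    return stable / samples
```

Run it twice with two different batches of random sequences; if the two fractions agree, extrapolating to all \(20^{200}\) possibilities seems a lot more reasonable.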

I was recently reading Stephen Meyer's book, The Return of the God Hypothesis, and I was relieved to see that Meyer addressed this issue I was having. This excerpt gave me hope that I was at least thinking it through correctly. He said,

Nevertheless, when I first met Denton, he told me that it was not yet possible to make a conclusive mathematical determination of the plausibility of a random mutational search for new functional genes and proteins. Molecular biologists, he told me, could not yet quantify how rare functional DNA sequences (genes) and proteins were among all the possible sequences of nucleotide bases and amino acids of a given length. Consequently, they couldn't yet calculate the relevant probabilities - and thus assess the plausibility of random mutation and natural selection as a means of producing new genetic information.

This looks to be on page 309 or 310, but I'm using a Kindle, so I can't be sure. Anywho, when I read that recently, I was all like, "That's what I've been saying!" A few pages later, he repeated basically the same thing. He said,

They also need to know how rare or common functional arrangements of DNA are among all the possible arrangements for a protein of a given length. That's because for genes and proteins, unlike in our bike-lock example, there are many functional combinations of bases and amino acids (as opposed to just one) among the vast number of total combinations. Thus, they need to know the overall ratio of functional to nonfunctional sequences in the DNA.

That's on page 312, I think. A few pages later, Meyer said he met Douglas Axe who had tried to answer this question. Axe determined that functional proteins are extremely rare. Meyer writes,

How rare are they? Axe set out to answer this question using a sampling technique called site-directed mutagenesis. His experiments revealed that, for every one DNA sequence that generates a short functional protein fold of just 150 amino acids in length, there are \(10^{77}\) nonfunctional combinations - combinations that will not form a stable three-dimensional protein fold capable of performing a specific biological function.

If I'm reading that right, it would mean only 1 in \(10^{77}\) sequences 150 amino acids long are functional. How many is that? We can figure that out with a ratio.

\[ \normalsize \frac{x}{20^{150}} = \frac{1}{10^{77}} \]

So,

\[ \normalsize x = \frac{20^{150}}{10^{77}} \approx 1.4 \times 10^{118} \]
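Since \(20^{150}\) is too big for a pocket calculator, logarithms are the easy way to size up that division (a quick sketch):

```python
import math

# log10(20^150 / 10^77) = 150*log10(20) - 77
print(150 * math.log10(20) - 77)  # ~118.15, so x is about 1.4e118
```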

The probability of not getting a functional sequence in 1 try would be,

\[ \normalsize \frac{20^{150} - 1.4 \times 10^{118}}{20^{150}} \]

And the odds of getting a functional sequence in \(8.7 \times 10^{87}\) tries are,

\[ \normalsize 1 - \left(\frac{20^{150} - 1.4 \times 10^{118}}{20^{150}}\right)^{8.7 \times 10^{87}} \]

Will somebody out there with a fancy schmancy calculator please calculate that and leave a comment with the answer? We can probably simplify it with an approximation. This should give close to the same result:

\[ \normalsize 1 - \left(1 - \frac{1}{10^{77}}\right)^{10^{88}} \]

That looks to me like it would give a result close to 1, meaning nearly a 100% chance. I wonder what would happen if I assumed more realistic probabilistic resources.

After playing around on my calculator, using more manageable numbers, I noticed that if the outer exponent (e.g. the \(10^{88}\) in the above equation) is higher than the number in the denominator (e.g. the \(10^{77}\) in the above equation), the probability is close to 100%, and if it's lower, the probability is close to 0%. It's only when they are close to each other that you get a probability in the 20 to 80% range. Let me see what happens if I take Douglas Axe's word for the 1 in \(10^{77}\) figure and use more reasonable probabilistic resources.
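That threshold behavior is easy to see in code. Here's a minimal sketch that computes \(1 - (1 - 1/d)^n\) stably and sweeps the number of tries past \(d = 10^{77}\):

```python
import math

def prob_at_least_one(n_tries: float, one_in: float) -> float:
    # 1 - (1 - 1/one_in)^n_tries, via expm1/log1p so nothing rounds to 0 or 1
    return -math.expm1(n_tries * math.log1p(-1 / one_in))

for power in (74, 76, 77, 78, 80):
    print(f"10^{power} tries: {prob_at_least_one(10.0**power, 1e77):.4f}")
```

The output climbs from about 0.001 at \(10^{74}\) tries to 0.63 at \(10^{77}\) and is indistinguishable from 1 by \(10^{80}\), which matches what I saw on the calculator.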

Let's keep the assumption of \(2 \times 10^{23}\) stars in the observable universe. Not all stars are going to have habitable planets because, for example, red dwarf stars are more active and are likely hostile to life. About 70 to 80% of stars are red dwarfs. I also suspect that stars living near the centers of galaxies aren't as conducive to life. A generous, but more realistic, estimate for the number of habitable star systems, then, would be half of the total stars, so let's go with that: \(1 \times 10^{23}\). That's a simpler number anyway.

Let's assume all these star systems have 1 planet or moon with amino acids, temperatures, and other conditions capable of supporting life. Now we have \(1 \times 10^{23}\) planets.

Red dwarfs live longer than medium-sized or ginormous stars, but we've eliminated most of them. The more massive a star is, the shorter its life, so the less time there is for life to emerge. Since our star is medium-sized, and since they say life on earth has maybe 1 billion years left, let's assume the average planet has 5 billion years in which to produce a functional protein. So instead of calculating the number of seconds in 13.8 billion years, we're going to use the number of seconds in 5 billion years. That's \(1.57 \times 10^{17}\) seconds.

Let's stick with 1 try per second, but this time, we're not going to assume each planet is nothing but amino acids. We'll still make a generous assumption, though. Let's assume 1/4 the mass of earth's oceans is made of amino acids. According to the internet, there are estimated to be \(4.64 \times 10^{43}\) water molecules in earth's oceans. A water molecule is made up of 3 atoms, so that's \(1.392 \times 10^{44}\) atoms. We're taking 1/4 of that, so that's \(3.48 \times 10^{43}\) atoms. The average amino acid is made up of 10 atoms, so there are \(3.48 \times 10^{42}\) amino acids. Our target protein this time is 150 amino acids long because we're using Axe's number. So there are \(2.32 \times 10^{40}\) proteins on each planet at any given moment.

Now we can calculate the number of tries.

\[ \normalsize 1 \frac{\text{try}}{\text{sec}} \cdot 2.32 \times 10^{40} \frac{\text{proteins}}{\text{planet}} \cdot 1 \times 10^{23} \, \text{planets} \cdot 1.57 \times 10^{17} \, \text{seconds} \]

\[ \normalsize = 3.6 \times 10^{80} \, \text{protein tries} \]

Our new probability is,

\[ \normalsize 1 - \left(1 - \frac{1}{10^{77}}\right)^{3.6 \times 10^{80}} \]
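Plugging those numbers into the little function from the sketch above:

```python
print(prob_at_least_one(3.6e80, 1e77))  # 1.0 for all practical purposes
```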

It looks like with the more realistic assumptions, albeit still generous, we still have a probability near 100% that at least one functional protein will be created somewhere in the observable universe. Of course in reality, you need thousands of proteins for life, and you need them all on the same planet, so maybe, just maybe, things will turn out to be unlikely after all.

I'm not totally convinced by Meyer's argument, even if the probability is small, because I don't really understand how Axe came up with this number, and I don't know whether his number is accepted by the community of geneticists and biologists out there. I don't know if there's any controversy about it or whether it's a widely accepted estimate.

I'm a little skeptical for the reasons I explained earlier--the fact that until recently, there was no way to predict how proteins would fold just from knowing the sequence. Whatever method Axe used, it seems like the method I suggested earlier would probably work better. I feel arrogant saying that given how little I know and understand, and I don't mean to sound that way. I'm just expressing what makes sense to me.

There's another issue that's relevant to this whole conversation, and that's how new genes/proteins are formed. There are lots of ways they can come about, and the way they come about should have some bearing on their probabilities. If you were just creating a fresh protein from scratch, the probability of getting a functional sequence would be far less than if you took two already existing functional proteins and spliced them together. Since we already know that each half folds into a stable shape, it's not that unlikely that the combination will also fold into a stable shape.

There are other ways to create new proteins, too. One way is to cut one in half. Another way is to insert a sequence into an already existing protein. You could even insert a sequence that existed in a different functional protein. You could take a functional protein and delete a section, and it would probably result in a different functional protein. So there are all kinds of ways to get new functional proteins from old ones, and those don't strike me as being nearly as improbable as creating one from scratch.

However, according to what I've read, there are genes and proteins that do emerge seemingly from scratch. They call them de novo genes or orphan genes. They don't have any known precursor. Some of these de novo genes might have precursors that are just lost to biological history. It doesn't mean they didn't exist. But some appear to have somehow emerged from what used to be called the "junk" part of the DNA. In a sense, they did emerge from scratch. It seems to me this argument I'm trying to think through would only apply to de novo genes that emerged from scratch or from the "junk" part of DNA, if there is such a thing.

If there's a section of DNA that doesn't code for proteins or serve some other purpose, it should be blind to the forces of natural selection. Natural selection tends to preserve useful sequences and get rid of harmful sequences. But if there's a sequence that is neither useful nor harmful, then for all practical purposes, it's random. If a gene emerges from a random sequence, then that would be an example of a de novo gene. A de novo gene like that can't be built up over time by making small improvements to an already existing functional sequence. These types of genes must feel the full force of the improbability we tried to calculate earlier. These types of genes used to be thought rare, but it turns out they are more common than once thought.

If anybody ever does the experiments I suggested earlier, here are a couple of things I would like to know.

First, I would like for somebody to pick some length to test, using simulations, AI, or whatever, and get an estimate of what fraction of sequences of that length can form functional proteins.

Second, I would like for somebody to do the same thing with a handful of other lengths. They could maybe test lengths of 20 amino acids long, 50, 100, 150, 200, etc. I would be curious to know if the fraction is the same for each length or if it's different. If it's different, I would like to know whether the fraction increases or decreases with length. Maybe you could plot it on a graph. It would be interesting to know if there's a curve to it. Maybe somebody could come up with an equation to describe the curve and discover a new law of biology or something.

That's where I'm at right now. I would love to hear your thoughts on this subject, so leave a comment.

Here's tomorrow's post on the same subject where I asked ChatGPT to pick my estimates.

Saturday, February 08, 2025

Can a compatibilist use Plantinga's free will defense?

For those not familiar, Alvin Plantinga's free will defense can be found in his book, God, Freedom, and Evil. The free will defense differs from the free will theodicy in that whereas the theodicy is an attempt to say what God's reason is for allowing evil, the free will defense merely offers free will as a possible scenario under which God had a good reason for creating a world containing evil.

Compatibilists are people who think free will and determinism are compatible. Since libertarian free will is an indeterministic model of free will, compatibilists obviously aren't claiming that libertarian free will is compatible with determinism. They have a different understanding of free will.

According to compatibilists, our choices are determined by our psychological states (e.g. our beliefs, desires, plans, motives, biases, preferences, etc.). Our actions are free so long as we are not being forced, through coercion, physical causation, or brute force, to act contrary to our desires, motives, etc. Compatibilists are determinists. They're just not hard determinists since they aren't claiming that our choices are determined by the laws of nature plus initial conditions in a blind mechanistic way. We do things for reasons rather than because of physical causes.

Plantinga's argument works by implementing what, in logic, is called "Giving a model of S." S is a set of sentences, statements, propositions, or whatever. Giving a model of S is an attempt to show that the set is internally consistent. To do that, you come up with another sentence or set of sentences that, if true, would render all the members of S true. This shows that all the members of S are logically consistent.

The sentence or set of sentences describing the model need not actually be true. They are just a model - a hypothetical scenario - that if true would render all the members of S true.

In the case of the free will defense, S is a set of statements that include (1) Evil exists, and (2) A God exists that is all-knowing, all-powerful, and wholly good. Obviously, if the first statement were the explicit negation of the second statement, they could not both be true because there would be an explicit contradiction. So we just want to know if there's an implicit contradiction between the two statements or if they are consistent with each other.

To do that, we want to create a model of S, i.e. a scenario that, if true, would entail the truth of both sentences in S. Plantinga suggests the proposition that "God created a world containing evil and has a good reason for doing so." Never mind whether the statement is true or not. The important thing is that if it were true, then it would entail the truth of the two sentences in S. That means the sentences in S are logically consistent.

But before we can use Plantinga's model, we first have to know whether the model itself is even possible. Could it be that God created a world containing evil and had a good reason for doing so? If that's not even possible, then it can't serve as a model of S.

To answer the question, Plantinga comes up with a hypothetical scenario in which God does have a good reason for creating a world containing evil. The hypothetical scenario is basically libertarian free will combined with Molinism.

According to Molinism, there are certain counterfactuals of human freedom that limit the possible worlds God can actualize. For example, consider this counterfactual:

If Jim meets Bob, Jim will shake Bob's hand

Now, consider two possible worlds in which Jim and Bob meet.

World 1: Jim and Bob meet, and Jim freely chooses to shake Bob's hand.

World 2: Jim and Bob meet, and Jim freely chooses not to shake Bob's hand.

According to Molinism, the counter-factuals of human freedom are truths about what people would or wouldn't do in given situations, and these truths are logically prior to these people even existing. God's omniscience includes his knowledge of these counter-factuals, so God takes them into account when he decides which possible world to make actual.

If the counter-factual about Jim is true, then any world God actualizes in which Jim and Bob meet will be a world in which Jim freely chooses to shake Bob's hand. That means that even though World 2 is a possible world, it is not a world God is able to actualize.

This is not a blow against God's omnipotence because omnipotence does not include the ability to engage in logical absurdity. While World 2 is logically possible, it is not logically possible for God to actualize World 2 because World 2 is inconsistent with the counterfactual of Jim's freedom. Molinists call such worlds "infeasible." A feasible world is a possible world that God could actualize because it's consistent with the counter-factuals of human freedom. An infeasible world is a possible world that God cannot actualize because it's inconsistent with the counter-factuals of human freedom.

So far, we've talked about a situation in which the counter-factual of Jim's freedom entails that there is a possible world that is not feasible for God to actualize. Suppose, now, that we consider a morally significant choice, like whether to be kind to somebody, whether to steal, etc. It could be that there are counter-factuals such that if God actualizes certain worlds, sin will happen. Suppose, though, that all the counter-factuals of Jim's freedom entail that no matter what world God actualizes in which Jim exists, Jim will sin in that world. Plantinga calls this "transworld depravity." That means there is no possible world that is feasible for God to actualize in which Jim does not sin. Plantinga further suggests the possibility that transworld depravity is something that everybody suffers from.

If that were the case, then it would be impossible for God to actualize any possible world that contains free creatures but no moral evil. There may be all sorts of possible worlds containing free creatures who never do anything wrong, but if everybody suffers from transworld depravity, then none of those worlds are feasible for God to actualize.

But, you might say, what about natural evil? What about suffering that is not the result of human free will decisions? Easy, says Plantinga. Natural evil could be the result of the free will decisions of evil spirits. Remember, Plantinga is not claiming that any of this is true. He's just giving a model of S, i.e. a scenario that, if true, would entail the truth of the two sentences in S, namely (1) that evil exists, and (2) that God is all-knowing, all-powerful, and wholly good.

Remember, the model of S is that God created a world containing evil and had a good reason to do so. The good reason is that there are no feasible worlds containing free creatures that do not sin.

But, you might say, God didn't have to create a world containing free creatures. So even if we grant that there are no feasible worlds containing free creatures that do not contain evil, there still might be worlds without free creatures that do not contain evil. The question, then, would be whether those worlds are better or not.

Those who subscribe to libertarian free will cite multiple reasons why a world containing free creatures is better than a world without free creatures, even if it means there will be moral evil. Here are a few of them:

1. Libertarian free will is necessary for moral good or evil, so if there were no free creatures, you might be able to eliminate moral evil, but you'd be eliminating moral good at the same time. As long as the moral good that results from libertarian freedom outweighs the moral evil, a world containing free creatures is better than a world without them, even though only the world with free creatures contains evil.

2. Libertarian freedom is necessary for life to have any meaning. If we are mere puppets on strings and don't make any choices, there's no reason for us to even be sentient. We might as well be philosophical zombies.

3. Libertarian freedom is necessary for love. Love isn't genuine if it's pre-programmed, hard-wired, or causally determined. It's only genuine if people freely love each other.

4. Libertarian freedom is necessary for reasoning. If everything you believe is just the end result of a blind mechanistic series of physical cause and effect, then there's no sense in which those beliefs could be the result of affirming a truth for good reasons. Reasons are irrelevant because, given a set of initial conditions plus the laws of nature, your current beliefs would be determined to emerge whether there were good reasons for them or not. If you happen to deny free will, that's just because the chemistry in your brain happens to be fizzing in a way that causes you to deny free will, and that fizzing is just part of a long causal chain stretching indefinitely into the past and future.

Now we come to the question of whether a compatibilist can use Plantinga's free will defense. After all, compatibilists do not subscribe to libertarian free will. It seems to me there are two things for a compatibilist to consider: (1) Is libertarian freedom even possible, and (2) Is a world with libertarian freedom better than a world without?

Since Plantinga's free will defense isn't offering the free will scenario as the actual answer to why there is evil in the world, and is only offering it as a possibility that, if true, would render S logically consistent, it would appear that a compatibilist need not affirm libertarian free will in order to use Plantinga's argument. Even though a compatibilist may think the free will scenario is false, as long as they grant it as a possible state of affairs, they should be able to use it. If they use it, they are only offering a model of S to show that the members of S are logically consistent.

Some compatibilists think libertarian free will is at least possible. It's something God could bring about if he wanted to. But there are some compatibilists who think libertarian free will is incoherent. It does not describe a possible state of affairs. For those compatibilists, it would be inconsistent of them to use Plantinga's free will defense. Unless libertarian free will is possible, it cannot serve as a model of S.

Compatibilists who think libertarian free will is at least coherent have to consider the additional question of whether the world would be better with or without libertarian freedom. If we look at the four reasons above for why a libertarian sees the good in libertarian free will, a compatibilist will probably disagree with all four points. Compatibilists deny libertarian free will but still affirm the reality of good and evil, that life has meaning, that people genuinely love each other, and that we are reasoning creatures capable of having justified beliefs.

Of course, it's possible that libertarian freedom serves some other good purpose, and a compatibilist could be open to that. It seems to me, though, that a compatibilist should be very skeptical that there is such a purpose (or at least a sufficient one) since, on the compatibilist's view, God chose to actualize a world without libertarian freedom. The fact that God actualized this world rather than one containing libertarian free will suggests that whatever goods might accompany libertarian free will, they weren't good enough to justify God actualizing a world with it.

One route a compatibilist might take is to grant the epistemological possibility that they are just wrong in all their compatibilist beliefs. Maybe they're wrong to reject the four reasons for why libertarian freedom serves a good purpose. Maybe they're wrong to think libertarianism is incoherent. Maybe they're wrong to think a world without libertarian freedom is better than a world with it. I'm not sure this kind of epistemological possibility is sufficient to justify using Plantinga's free will defense, though. If libertarian free will is incoherent, but the compatibilist just doesn't know it, then libertarian freedom still can't serve as a model of S. It is not consistent to think libertarian freedom is incoherent while, at the same time, offering it as a model of S. I'm curious if anybody reading this disagrees with me about that.

A compatibilist may reject libertarian free will and still use Alvin Plantinga's argument. Remember, Plantinga's model of S consisted of the statement, "God created a world containing evil and had a good reason for doing so." Libertarian free will was only offered as a hypothetical example to show that such a thing is possible. It's possible, because of libertarian freedom, that God could have a good reason for creating a world containing evil. Even though a compatibilist might reject the idea that libertarian freedom serves as a good reason for God creating a world containing evil, they could come up with some other scenario that does the same thing. And, again, the scenario need not be true. It need only be possible.

Here is one possibility.

God himself is the greatest possible good. Everything about him is good. For any good attribute God has, a world in which that attribute gets expressed is better than a world in which it doesn't get expressed. An active good is better than a dormant good. A world where all of God's attributes get expressed is better than a world in which some of them, though good, never get expressed.

Now, consider some of the attributes God has, like mercy, a willingness to forgive, and wrath against sin. None of these attributes can be expressed if there is nothing to forgive, no occasion to show mercy, and no sin to punish. People may have a hard time wrapping their minds around the idea that punishing sin is a good thing or that it wouldn't be better if there were no sin to punish. But if you accept that everything about God is good, and that God does have wrath against sin, then you have to accept that it is good for God to express wrath against sin.

Under this hypothetical scenario, God's goodness actually entails that evil exist. Evil is necessary for God to give full expression to all his attributes. Expressing his attributes is how God glorifies himself. We glorify God by giving rise to the expression of those attributes. God is glorified in the expression of his mercy towards some, and he is also glorified in the expression of his wrath toward others. If this scenario is possible, then it's possible that God created a world containing evil and has a good reason for doing so. That, then, can serve as a model of S showing that the existence of evil is compatible with the existence of an all-knowing, all-powerful, and wholly good God.

Wednesday, February 05, 2025

How ChatGPT defines Christianity

If you ask a Calvinist and a Jehovah's Witness to define Christianity in a nutshell, you will probably get different, but related, answers. Jehovah's Witnesses will probably focus on the kingdom of God and Jesus' rulership over that kingdom. Calvinists will probably focus on the gospel of grace and how God saves people for his glory. I have attempted to define Christianity in terms of seven points that include both worldview issues and the gospel in a nutshell (see here for example).

I was thinking about that today and thought it might be interesting to ask what ChatGPT would say. Here's my question and ChatGPT's answer.

Sam: What is the most succinct definition of Christianity that you can give?

ChatGPT: Christianity is the faith centered on the life, teachings, death, and resurrection of Jesus Christ, believing in Him as the Son of God and Savior of humanity, offering salvation through faith, grace, and love.

What do you think? How would you have answered my question? What are the core essential defining points of Christianity? What makes Christianity what it is and not something else? What's the most succinct definition of Christianity you can give?

Sunday, February 02, 2025

What if there were a message embedded in the digits of pi?

In his book, Contact, Carl Sagan imagined the idea that there could be a message embedded somewhere in the digits of pi. There are lots of ways a message could be embedded, though.

The circumference of a circle can be given by the equation,

\[ \normalsize C = 2 \pi r \]

So pi can be defined as,

\[ \normalsize \pi = \frac{C}{2 r} \]

Or, put another way, pi is the ratio between the circumference and the diameter.

Everybody seems to agree that if you put pi in its decimal form, there are infinitely many digits. The reason is that pi is irrational (Johann Lambert proved this back in 1768), and an irrational number can't have a terminating decimal expansion. I've always wondered, though, whether there's a point at which you get a repeating pattern that continues from then on out. It turns out that's been ruled out too: any decimal expansion that eventually repeats forever is really a fraction of two whole numbers in disguise, i.e. a rational number, so pi's digits can never settle into a repeating pattern.

Consider the fraction 1/3. That one gives 0.33333333. . . But the fraction 2/7 gives 0.285714285714285714. . . There's no limit to how long the repeating block of a fraction can be. It could be a million digits long or even 10^100 digits long. But however long the block is, an eventually repeating decimal always works out to a fraction, and pi is provably not a fraction.
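To see why a repeating decimal is always a fraction, take the 2/7 example. The repeating block is six digits long, so multiply by 10^6 and subtract:

\[ \normalsize x = 0.\overline{285714} \implies 10^{6}x - x = 285714 \implies x = \frac{285714}{999999} = \frac{2}{7} \]

The same trick works for any repeating block, however long it is.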

Anyway, back to the topic.

Given that the digits of pi are infinite and never repeat, what are the chances that a message could be embedded in there somewhere?

Imagine if pi contained a string of numbers that represented:

I am the God of Abraham Isaac and Jacob

If you convert each character to its 8-bit ASCII code in binary, it would be:

01001001 00100000 01100001 01101101 00100000 01110100 01101000 01100101 00100000 01000111 01101111 01100100 00100000 01101111 01100110 00100000 01000001 01100010 01110010 01100001 01101000 01100001 01101101 00100000 01001001 01110011 01100001 01100001 01100011 00100000 01100001 01101110 01100100 00100000 01001010 01100001 01100011 01101111 01100010
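In case you want to check the encoding or try other phrases, here's a quick Python sketch that reproduces the string above:

```python
message = "I am the God of Abraham Isaac and Jacob"

# each character becomes its 8-bit ASCII code written in binary
binary = " ".join(format(ord(ch), "08b") for ch in message)
print(binary)
```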

So that's one way it could be embedded in pi. Now, infinite non-repeating digits don't by themselves guarantee that every string shows up. But mathematicians widely conjecture (though nobody has proved) that pi is "normal," meaning every finite string of digits eventually appears. If that conjecture is right, then a string of numbers like that is in there somewhere.

It doesn't have to be in binary, though. It could be in base 3, 4, 5, 6, 7, 8, 9, or 10. The letter "i" in binary is 01101001. If you convert that to base 10, it's 105. You could do that with each letter in the string and end up with a string of base-ten numbers that represents the statement about God. You could also do it in different languages. Between those nine bases and the thousands of different languages there are, there are all kinds of ways for the God statement to be embedded in pi.

I don't want to make this too complicated, so let's just use the string of letters in the statement, "I am the God of Abraham Isaac and Jacob," ignoring the spaces, capitalization, and the commas that should be in there. What are the chances that you could just randomly get "iamthegodofabrahamisaacandjacob"?

There are 31 characters in that string of letters, and there are 26 letters in the alphabet. So each character could be any of 26 letters. The probability, then, would be 1 in 26^31. That's about 1 in 7.31 x 10^43. Those are slim odds.

But suppose you have 26^31 chances for it to happen? Then it's not so unlikely that it could happen. If you suppose that there are 26^1000 chances for it to happen, it seems inevitable that it would. Since there are infinitely many digits in pi, it seems almost inevitable that somewhere in there, the statement about God would be embedded.
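Here's a quick Python sketch of that reasoning. Since p = 1/26^31 is tiny, the probability 1 - (1 - p)^N of at least one hit in N tries is well approximated by 1 - e^(-Np), and working in logarithms keeps the huge numbers from overflowing:

```python
import math

LOG10_26 = math.log10(26)
P_LOG10 = -31 * LOG10_26          # log10 of the 1-in-26^31 probability

def prob_at_least_one_hit(trials_log10):
    """Chance of at least one hit in 10**trials_log10 tries, via 1 - exp(-N*p)."""
    expected_hits_log10 = trials_log10 + P_LOG10
    if expected_hits_log10 > 2:   # expected hits number in the hundreds or more
        return 1.0
    return 1 - math.exp(-(10 ** expected_hits_log10))

print(prob_at_least_one_hit(31 * LOG10_26))    # 26^31 tries  -> about 0.63
print(prob_at_least_one_hit(1000 * LOG10_26))  # 26^1000 tries -> 1.0
```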

But given how unlikely it is, we shouldn't expect to find it within the first few digits. It may be that we'd have to look through 10^44 digits before we'd be remotely likely to find it.

What if we found it a lot earlier, though? What if we found it within the first million digits? We know we haven't, but this is just a hypothetical. If that were to be the case, it would be pretty amazing, wouldn't it? But could we draw any conclusions from it? What would the significance be?

Recently, somebody on YouTube said that if there were a message embedded in pi, it would be evidence for God. That's what got me thinking about this whole subject. I had a discussion in the comment section with some people about that because I don't think it would be evidence for God. As amazing as it would be, the reason I don't think it would point to God is because pi is a necessary number. If it happens that the statement about God occurred within the first million digits of pi (or anywhere in pi), then it could not have been otherwise. The ratio between the circumference and diameter of a perfect circle is fixed and unalterable by necessity. It's just as necessary as the fact that 2+2=4. There's no possible world in which things are different. God could no more make pi different than he could make 2+2=5. So God can't be credited for embedding the statement if it were in there.

There is a way, though, that a message about God in pi could point to God's existence. So far, I've been talking about pi in an idealized way. I've been talking about a perfect circle in flat Euclidean space. But suppose we were talking about pi in nature? We know, from general relativity, that spacetime has a curvature that varies from place to place. There's more curvature near massive objects than there is when you're in some void far away from any massive objects. Astrophysicists have attempted to determine whether there's any curvature on the scale of the whole observable universe, and it appears to be flat. But they can't measure the curvature of the universe with exact precision. So although it appears to be flat, there's some uncertainty involved. It could be there's some positive curvature, but the curvature is so slight that we can't detect it.

It's similar to how it's hard to tell the earth is round just by looking at your back yard. Even if you had a swimming pool, and the water was perfectly still, you couldn't measure the curvature of the surface of the water and determine that the earth is round. It would appear flat to any measuring device you used because the earth is so big. In the same way, the universe could have some positive curvature that we can't detect within the confines of the observable universe.

Suppose we could, though. Suppose the universe has some positive curvature, be it ever so slight, that would make pi, as it exists in nature, different from the idealized Euclidean pi. And suppose that pi, as it exists in nature, contained the statement about God somewhere within the first 1000 digits.

I think that would be definite proof of God. No conceivable alien could cause the curvature of spacetime to be such that it creates the God message within the first 1000 digits of pi, and given how enormously improbable such a thing would be, the best explanation would appear to be God. I think it would be downright unreasonable to deny the existence of God at that point.

If some given curvature that makes pi spell out the God message proves God's existence, then why wouldn't perfectly flat spacetime do the same thing? After all, if we're imagining a range of possibilities for the curvature of spacetime, perfect flatness would just be one precise possibility within that range. The universe could have positive curvature, negative curvature, or perfect flatness. Perfect flatness is just as improbable as any other curvature.

It's hard to put my finger on the reason, but I just don't think finding a message in pi given perfect flatness would point to God in the same way that finding a message in pi with some curvature would. Maybe it's because flatness is idealized in some sense. I would still find it pretty amazing if the God message were embedded in the flatness version of pi. I just wouldn't consider it evidence for God.

What would you think if the standard pi contained a message about God somewhere? Would it matter where? Would it matter what the message was? Suppose there were all kinds of words and phrases found in pi, some meaningful, and some gibberish, and one of them happened to be the God message? Would that make an impression on you?

An Aside: There are web pages where you can search for things in the digits of pi. Here's one that allows you to find your birthday (or any other string of numbers) in pi. My birthday begins at the 2,462,816th digit.

I tried searching for "Jesus is God" in a few of those pi search thingies, but none of them would find it for me. I simplified it to:

jesusisgod

In binary, that's:

01101010 01100101 01110011 01110101 01110011 01101001 01110011 01100111 01101111 01100100

In base 10, that's:

106101115117115105115103111100
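For anyone who wants to reproduce those two strings (or encode a different phrase), here's a short Python sketch:

```python
message = "jesusisgod"

# 8-bit ASCII binary, as in the first string above
print(" ".join(format(ord(ch), "08b") for ch in message))

# decimal ASCII codes run together, as in the second string
print("".join(str(ord(ch)) for ch in message))
```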

If anybody else is able to find it, let me know.

EDIT: I found one! This web page found my sequence in the 7.196305715199008 x 10^30th position.

Let's see if that's remarkable. My sequence is 30 digits long, and there are 10 possible values for each digit. That means the odds are 1 in 10^30 that any given 30-digit stretch would match my sequence. So yeah, we found it about where we'd expect given the odds. Nothing remarkable about that.
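To put a number on "about where we'd expect": treating the digits of pi as random, the chance the sequence shows up within the first N digits is approximately 1 - e^(-N/10^30). A quick sketch:

```python
import math

P_MATCH = 10.0 ** -30   # chance a given 30-digit stretch matches the sequence

def prob_found_within(n_digits):
    # 1 - (1 - p)^n, approximated by 1 - exp(-n*p) since p is tiny
    return 1 - math.exp(-n_digits * P_MATCH)

print(prob_found_within(1e30))    # ~0.63: even odds by the 10^30th digit
print(prob_found_within(7.2e30))  # ~0.999: near certain by where it turned up
```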

Wednesday, January 29, 2025

Lambda-CDM vs. Timescape

A long long time ago, Albert Einstein felt like he needed to have a cosmological constant in his field equations to counter the force of gravity because without it, the universe should be collapsing or expanding. When Edwin Hubble later discovered that the universe really was expanding, Einstein was all like, "The cosmological constant was the biggest blunder of my career."

Once it became widely accepted that the universe was expanding, it was always assumed that the rate at which it expands should be slowing down because of gravity. The big question was whether it was slowing down fast enough to cause the universe to recollapse or whether the universe would expand forever. In 1998, some astrophysicists tried to answer this question, but what they found, instead, was that the expansion of the universe was accelerating.

The accelerating expansion of the universe gave rise to the Λ-CDM model of the universe (aka Lambda-Cold Dark Matter), which has been the prevailing view for the last two decades. Lambda is the old cosmological constant that has resurfaced. Einstein's "blunder" wasn't a blunder after all. Lambda was back. Nobody knew what was causing the expansion of the universe to accelerate, so they called it "dark energy." Lots of ideas have been suggested for what might be causing it.

Well, recently, there has been a new plot twist. Two papers were published suggesting that the accelerating expansion might just be an illusion brought on by time dilation. You see, as the universe expands, it becomes more clumpy. Galaxies merge, while voids become bigger. Since time passes more slowly near massive objects (because of the higher gravity), and time passes more quickly in the voids, this may cause it to appear that the expansion of the universe is accelerating when it really isn't.

This other model of the universe is called Timescape, and it does away with the cosmological constant because the expansion of the universe isn't really accelerating. If Timescape turns out to be right, it would undermine the Λ-CDM model. You wouldn't need a cosmological constant.

What I find interesting about this development is that the cosmological constant is often cited as the best example of fine-tuning. They say that the cosmological constant is so fine-tuned that if it were to differ by one part in 10^120, then the universe would either have collapsed too quickly for life to have ever emerged, or it would have expanded so fast that stars and galaxies could not have formed, in which case life would not have been possible. The cosmological constant has to be fine-tuned to an incredibly precise value for life to even be possible in the universe, they say.

But if Timescape is true, then there isn't a cosmological constant at all, much less a fine-tuned one. I find that very interesting.

Several YouTube science channels have commented on the two new papers. Two of the best videos I've seen on it were by Dr. Becky and PBS Spacetime. I recommend watching these videos because they explain the papers really well. They both express skepticism about the Timescape model and say the evidence still favors Λ-CDM.

I suppose only time will tell. It's like whenever somebody claims to have discovered some artifact that overturns everything we thought we knew about the ancient world. You have to wait and see. Sometimes it pans out, and sometimes it doesn't. Most of the time, it doesn't. But it's always very interesting when it does.

Sunday, January 26, 2025

20 year blog anniversary

It's been 20 years today since I started this blog. It has been a great outlet. I'm glad I did it. Thanks to everybody who has left comments and engaged me in conversation, especially DagoodS, Paul Scott Pruett, Psiomniac, Dale, Jeff, Steve, and all the people who participated in the Book of Mormon discussions, including Carl, Angela, Curtis, Tracy, Fern, and Dave. A huge thanks to my old friend, Angie, for allowing me to post the Conversations With Angie series. Also thanks to Tabatha for engaging with me on your Jewish view about Jesus. Thanks to Richard, Safiyyah and everybody who left one-off comments and were never seen again. Thanks to everybody I left out. It's been 20 years, so I can't very well go through and name everybody.

In the last 20 years, there's only been one year I didn't post anything - 2016.

Thursday, January 23, 2025

Does it make sense to pray for another person's salvation?

This is an issue that Calvinists and non-Calvinists challenge each other with. The challenge is different depending on how you think God saves people.

Calvinists believe that God decides before the foundation of the world who he is going to save, and he does so for his own good pleasure. It has nothing to do with anything about the person. He doesn't look ahead and choose people based on his foreknowledge of their belief or behavior. He just makes a unilateral decision to save people. Then he disposes the world in such a way that whoever he decides to save will eventually come to faith in Jesus.

Here's the challenge for Calvinists: If God determines who he will save and who he won't, and God always gets his way, then what's the point in praying for anybody's salvation? We can't change God's sovereign will. He decided what he was going to do before we were even born. Whether God chooses to save somebody or not, our prayer shouldn't make any difference.

My answer is basically the same as my answer to the question of why we should evangelize if God determines who will be saved. It's because God doesn't just determine the end result. He also determines how that end result will get accomplished. The reason people come to believe in Jesus is because somebody preached the gospel to them. The preaching of the gospel is part of the means through which God brings about conversion. And the fact that we evangelize is also determined by God because he's sovereign over everything that happens.

In the same way, God may determine, for his own glory, to act in answer to prayers, and he may plan to do so from the foundation of the world. The prayers themselves may be part of what God decrees to happen, and God plans before he even creates the world, that he will act in answer to those prayers. So it makes all the sense in the world to pray for people's salvation given the Calvinistic belief that God is sovereign in salvation.

Here's the challenge for non-Calvinists: If God loves all people equally, and equally wants to save them all, then we should expect that God has exhausted all effort to save them short of violating their free will. With that being the case, what is the point in praying for their salvation? God has already done all he is willing to do. If he has not done all he is willing to do, then why not, if he wants to save everybody? In a synergistic model of salvation, where God does his part in offering the gift of salvation and it remains for us to do our part in accepting the gift, it doesn't seem like there would be anything left for God to do except make the person willing to accept the gospel, which God supposedly never does because that would violate our free will. If you deny irresistible grace, then God can't actually accomplish saving somebody without that person's cooperation. So God can't actually answer your prayer for somebody's salvation. He can only try.

A non-Calvinist could give an answer similar to the one I gave. They could say that God knows we're going to pray for somebody's salvation, so he withholds whatever effort he would otherwise have made toward getting that person saved until we pray. Then he could be said to have exerted that effort toward the person's salvation in answer to the prayer. That is a possibility, but I've never heard a non-Calvinist put it like that.

I often hear non-Calvinists say the reason God doesn't perform spectacular miracles or do whatever he can to save people is because he knows they wouldn't repent even if they saw a miracle. Only people who would repent get to see the miracles. They may be right. It seems to be implicit in this sort of statement, though, that for everybody who would embrace the gospel under some set of circumstances, God brings about those circumstances. For anybody who God knows would not embrace the gospel under any circumstances, there's no reason to bring about circumstances we think might lead to their conversion. God knows they wouldn't.

On the surface, I think that's a reasonable response, but it's not without some snags. For example, Jesus said in Matthew 11:21 that if the miracles that were performed in Chorazin and Bethsaida were also performed in Tyre and Sidon, they would have repented. But the miracles were not performed, and they did not repent. This seems to cast doubt on whether God does everything that could have been done to save people.

I suppose there are a lot of responses a non-Calvinist could give to the issues raised. To a certain degree, we just have to admit we don't know why God does or doesn't do things the way we think he should or would like for him to. I don't think there's anything wrong with just saying, "I don't know," and trusting God anyway.

But I do think the problem non-Calvinists face is a little thornier than the problem Calvinists face when it comes to praying for somebody's salvation.

AN AFTERTHOUGHT: I am so disappointed! I heard a sermon a long time ago, I think by Brian Borgman, where he used Isaiah 37 as an example of where God did something in answer to prayer, yet the whole thing was planned long before the prayer was ever uttered. Hezekiah prayed that God would save Israel from Sennacherib, the king of Assyria (verses 14-20). In verse 21, God said, "Because you have prayed to me concerning Sennacherib king of Assyria. . ." then it launches into a judgment against Sennacherib in which 185,000 Assyrian soldiers die, and Sennacherib eventually dies. In verse 26, it says, "Have you not heard that I determined it long ago? I planned from days of old what now I bring to pass." The sermon I listened to said God planned the judgment against Sennacherib long ago, yet he brings it about in answer to Hezekiah's prayer. This was meant to be an example of how praying is consistent with, and included in, God's eternal decree. I was going to use this as an example until I read it more carefully.

When it says God planned the whole thing long ago, it wasn't talking about the judgment against Sennacherib and Assyria. Rather, it was talking about all the terrible things Assyria did to other nations. There is nothing in this passage that says God planned long ago to judge Assyria in answer to Hezekiah's prayer, so it's not a good proof text for that sort of thing. It is, however, a good proof text that God does decree evil, which I talked more about here.