Sunday, July 02, 2023

The normalizability objection to fine tuning, take one

Timothy and Lydia McGrew and Eric Vestrup published a paper called "Probabilities and the Fine-Tuning Argument." They came up with an objection to the fine-tuning argument based on the fact that you can't specify the probability of a finite range of values within an infinite range of possibilities. The reason is that the probability distribution wouldn't be normalizable.

According to the principle of indifference, if you don't know what the probability distribution is over some range of values, then you assume an equal probability distribution. That is, you assign the same probability to each possibility. For example, if you had a six-sided die, and you didn't know whether it had been ground in such a way as to make it more likely to land on 2 than on 3, then you would assume it has an equal chance of landing on any side. Each side would have a 1 in 6 chance of landing face up. Since each side has a 1 in 6 chance of landing face up, and there are six sides in all, the probabilities of all the possible outcomes add up to 1.

1/6 * 6 = 1

If the probability distribution is not even, then whatever the probability of each side is, the probabilities should still add up to 1. The reason is that all the possibilities taken together amount to a guarantee: if you roll the die, some side is guaranteed to face up. Otherwise, you haven't accounted for all the possibilities.

That's what it means for a probability distribution to be normalized. It means the individual probabilities of all the possibilities add up to 1 or 100%.
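To make that concrete, here's a tiny check in Python (my own illustration, nothing from the McGrews' paper):

```python
from fractions import Fraction

# A fair six-sided die: each face gets probability 1/6.
fair = [Fraction(1, 6)] * 6
print(sum(fair))        # 1 -- the distribution is normalized

# A lopsided die: whatever the individual probabilities are, they still
# have to add up to 1 to cover every possible outcome.
lopsided = [Fraction(3, 10), Fraction(1, 10), Fraction(3, 20),
            Fraction(3, 20), Fraction(1, 5), Fraction(1, 10)]
print(sum(lopsided))    # 1
```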

It is possible to normalize a probability distribution over an infinite range of possibilities, though. Consider a convergent series that sums to 1, such as this:

1/2 + 1/4 + 1/8 + 1/16 + . . . = 1

So if you had a probability distribution over an infinite number of possibilities in which the possibilities were put in one-to-one correspondence with the terms of that convergent series, you could normalize that probability distribution.
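And just to see those partial sums creeping toward 1, here's a quick check in Python (again, my own illustration):

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... : possibility n gets probability 1/2**n.
total = 0.0
for n in range(1, 51):
    total += 1 / 2**n
    if n in (5, 10, 20, 50):
        print(n, total)

# 5  0.96875
# 10 0.9990234375
# 20 0.9999990463256836
# 50 0.9999999999999991  (the full infinite sum is exactly 1)
```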

If you were using the principle of indifference, though, then you couldn't normalize the probability distribution over an infinite range of possibilities. First of all, the probability of each member would be 1/infinity, which is zero. Second, even if it weren't zero but some small finite number, the probabilities of the possibilities wouldn't sum to 1. They would sum to infinity.

Another related problem is that if the range of possible values is infinite, then the probability of any finite range within the total range would be infinitesimal. That would render fine-tuning meaningless because no matter how big the life-permitting range of some value is, as long as it's finite, the universe would still be fine-tuned. 1/n approaches zero as n approaches infinity, but the same thing is true of 10^500/n. It doesn't matter how big the life-permitting range is. If the range of possible values is zero to infinity, the probability of getting something in any finite-sized life-permitting range is still infinitesimal. To paraphrase Syndrome, "If everything is fine-tuned, then nothing is."
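Here's a little numerical sketch of that point, using exponents so the huge numbers don't overflow (my own illustration, not anything from the McGrews' paper):

```python
# Under a flat ("indifferent") distribution on [0, N], the chance of landing
# in a life-permitting window of width W is just W/N.
# Work with the exponents: if W = 10**w and N = 10**n, then W/N = 10**(w - n).

w = 500                          # an absurdly generous window: 10^500 units wide
for n in (501, 510, 600, 1000):  # ever-larger total ranges, N = 10^n
    print(f"N = 10^{n}:  P = 10^{w - n}")

# N = 10^501:   P = 10^-1
# N = 10^510:   P = 10^-10
# N = 10^600:   P = 10^-100
# N = 10^1000:  P = 10^-500
# However wide the (finite) window, P heads to zero as N heads to infinity.
```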

Luke Barnes, an astrophysicist from Australia, published a philosophical paper responding to the normalizability objection. The paper is called "Fine-Tuning in the Context of Bayesian Theory Testing." Most of this paper is over my head, but after furrowing my eyebrows and twisting my hair around my finger, I think I have gotten a handle on one particular paragraph on the bottom of page 7 of his paper that I want to talk about today.

I'm going to use the rest mass of an electron to explain, as best I can, how we can limit the possible range of values in order to normalize the probability distribution of those values. Basically, we can limit the range by what makes sense within the theories that describe the electron.

Bear with me. There's going to be a little math. Nothing too difficult. Also, just as a disclaimer, Luke doesn't go into all this math in that paragraph. Once I thought I understood what he was saying, I went and crunched the numbers to see for myself. Physics makes more sense to me if I can see the math. This is my attempt to break it down and explain it to you in a way that's more detailed and easier to understand (I think). If there are mistakes in these details, they are mine, not Luke's.

There are two theories that come into play in this explanation. There's quantum mechanics, and there's general relativity. According to general relativity, if you condense a given amount of mass to within a certain radius, it will become a black hole. The radius at which a given mass becomes a black hole is called the Schwarzschild radius. Here is the equation for the Schwarzschild radius:

R = 2mG/c^2

m = mass
G = the gravitational constant = 6.6743 x 10^-11 N*m^2/kg^2
c = the speed of light = 299,792,458 m/s

The rest mass of an electron is 9.109 x 10^-31 kg, which is 0.511 MeV/c^2. We can plug that into the equation to calculate the Schwarzschild radius of an electron.

R = (2 * 9.109 x 10^-31 kg * 6.6743 x 10^-11 N*m^2/kg^2)/(299,792,458 m/s)^2 = 1.35 x 10^-57 meters
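If you want to check that arithmetic, here's the same calculation in Python (my sketch, not anything from Luke's paper):

```python
# Schwarzschild radius of the electron: R = 2mG/c^2
G = 6.6743e-11          # gravitational constant, N*m^2/kg^2
c = 299_792_458         # speed of light, m/s
m_e = 9.109e-31         # electron rest mass, kg

R_s = 2 * m_e * G / c**2
print(R_s)              # ~1.35e-57 meters
```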

That's pretty small. Nobody really knows how small an electron actually is, though. There were some experiments where they bounced electrons off of each other and tried to figure out how big they were by looking at the scattering pattern, but the electrons looked like point particles with no size at all. You'd think that if an electron were that small, it would be a black hole. If it has no size but some finite mass, then its density would be infinite, and zero radius is well within the Schwarzschild radius. So what the what, you ask?

Well, that's where quantum theory comes into play. In quantum theory, the size of an electron is characterized by its Compton wavelength.

λ = h/mc

h = Planck's constant = 6.626 x 10^-34 joule-seconds
m = mass
c = the speed of light = 299,792,458 m/s

Instead of running the calculation this time, let's just get the Compton wavelength off the internet. For an electron, it's 2.426 x 10^-12 m. Notice the Compton wavelength of an electron is many orders of magnitude bigger than its Schwarzschild radius. That's why the electron is not a black hole.
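Or, if you'd rather verify it than trust the internet, here's the calculation in Python (again, my own sketch):

```python
# Compton wavelength of the electron: lambda = h/(m*c)
h = 6.626e-34           # Planck's constant, J*s
c = 299_792_458         # speed of light, m/s
m_e = 9.109e-31         # electron rest mass, kg

wavelength = h / (m_e * c)
print(wavelength)       # ~2.43e-12 meters

# That's roughly 45 orders of magnitude bigger than the electron's
# Schwarzschild radius (~1.35e-57 m), which is why it isn't a black hole.
```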

But suppose the electron were more massive. Well, there's a limit to how massive an electron could be before it becomes a black hole. To figure out what that limit is, let's set the Schwarzschild radius equal to half the Compton wavelength and solve for the mass.

2mG/c^2 = (1/2) * (h/mc)

So, m = sqrt(hc/4G)

m = sqrt([6.626 x 10^-34 J*s x 299,792,458 m/s]/[4 * 6.6743 x 10^-11 N*m^2/kg^2]) = 2.73 x 10^-8 kg

In case you're worried about the units, 1 joule is 1 kg*m^2/s^2, and 1 newton is 1 kg*m/s^2. The units work out. Don't worry; I did this on paper first. It's only the final number that might be wrong, in case I made a typo on my calculator.
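And here's the same calculation in Python so you can check it yourself (my sketch):

```python
import math

# Setting the Schwarzschild radius equal to half the Compton wavelength,
# 2mG/c^2 = (1/2)(h/mc), and solving for m gives m = sqrt(hc/4G).
h = 6.626e-34           # Planck's constant, J*s
c = 299_792_458         # speed of light, m/s
G = 6.6743e-11          # gravitational constant, N*m^2/kg^2

m_max = math.sqrt(h * c / (4 * G))
print(m_max)            # ~2.73e-8 kg

# For comparison, the Planck mass, sqrt(hbar*c/G):
hbar = h / (2 * math.pi)
print(math.sqrt(hbar * c / G))  # ~2.18e-8 kg
```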

Notice that all of this just takes quantum theory and general relativity to their logical conclusions and predicts the highest mass an electron could have before becoming its own black hole. In reality, it's hard to say what would happen if an electron were that massive. Quantum mechanics and general relativity conflict on those kinds of scales, and we need a theory of quantum gravity to know what really happens.

But what this shows, according to Luke Barnes, is that there is a finite range of values an electron's mass can take before our theories start to break down. Beyond that range, we can't trust quantum mechanics and general relativity. If we want our theories to make sense, then we have to place a limit on the range of possible values the various constants can take. In the case of the electron, we can limit the possible range from zero to 2.73 x 10^-8 kg. Zero is a natural place to put the lower limit because negative mass doesn't make much sense. But if you don't like that, then you could put the lower limit at -2.73 x 10^-8 kg. Either way, we'd have a finite range of possible values, and that would allow us to normalize our probability distribution.
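Just to show the mechanics of normalizing over that finite range, here's a toy example in Python. The upper limit comes from the calculation above, but the "life-permitting window" here is completely made up for illustration; it is not a real figure from Luke's paper or anywhere else.

```python
# With a finite range [0, m_max], a flat distribution is easy to normalize:
# the density is 1/m_max, and the probability of any interval is its
# width divided by m_max.
m_max = 2.73e-8                 # upper limit from the calculation above, kg

# A hypothetical life-permitting window for the electron mass. These
# particular bounds are made up purely for illustration:
low, high = 4.5e-31, 1.8e-30    # kg

p_life = (high - low) / m_max
print(p_life)                   # ~4.9e-23 -- small, but perfectly well-defined
```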

According to Luke Barnes, what I just showed with the electron can also be done with other constants. For constants that have units, like the mass of an electron, Luke says we can use the Planck scale to define a finite range of possible values for the constants. That fits what I calculated above: the maximum mass I got is essentially the Planck mass (it's bigger only by a factor of about 1.25 because I used h rather than the reduced Planck constant). For constants that don't have units, we can limit the ranges in other ways that I didn't go into in this blog post. He went into that in his paper, too.

There's a lot more to Luke's paper, and most of it I don't understand. What I just explained was my interpretation of the last paragraph on page 7 of his paper. If you read his paper and you get to that paragraph, please leave a comment and tell me if you think I've misunderstood something or made some mistake.

6 comments:

Paul said...

Regarding this: "Another related problem is that if the range of possible values is infinite, then the probability of any finite range within the total range would be infinitesimal. That would render fine-tuning meaningless because no matter how big the life permitting range of some value is, as long as it's finite, the universe would still be fine-tuned."

I don't actually believe the range of possible values for anything is infinite, but let's say it is. Let's also say there is a finite range of values for any given force or constant to support a life-permitting universe. No matter where that is or how big a range, it is minuscule in comparison to infinity. So, the chance of our variable falling into that range is, by definition, infinitesimally small. This implies that if the parameter space for anything is infinite, then the "probability" of fine-tuning is of necessity 1. Is it fair to say one doesn't favor infinities because it forfeits the game to design advocates?

My objection to infinities is that we have no real-world experiences with the philosophical concept and they lead to paradoxes and non-intuitive conclusions. The question then becomes, "what finite boundaries are available for any given parameter?"

The materialist (who doesn't favor multiverse theory) would like to limit that range as much as possible — even to suggest that the parameters are somehow, of necessity, set at the only value permitted for them. But this doesn't escape the fine-tuning problem; it only pushes it back to the level of fundamental reality and the apparent truth that any created universe must be *this* life-permitting one. How fortuitous. The alternative seems to be that whatever the creative force of the universe was, it could have yielded at least some range of values for any given parameter. How big that range is can be debated.

You do a good job of explaining Luke's criterion for setting a meaningful boundary on electron mass. It doesn't seem to mean, though, that the mass could not theoretically be outside of this range. Perhaps it is just a matter of discarding, a priori, all possible scenarios that would have a universe full of lepton and quark black holes. It's far more interesting to ask, what if the thing was like this vs like that, as opposed to, what if the thing were black holier than a black hole.

I've wondered if we can just set conceptual boundaries based upon minimums and maximums already found in nature. I included this in my own fine-tuning blog post:

"In thinking about this in terms of our experience with the existing forces it should be noted that the difference between the weakest (gravitational constant) and strongest force (the strong nuclear) is 1043. If we were to use this kind of vast scale, then all the alleged flexibility discovered in any of the parameters makes no visible difference to any chart we might draw."

Whatever the ranges, it seems to me that even relatively small differences have been seen to be problematic. I think the very possibility of different parameters at all opens the door wide to the fine-tuning problem. I think this is one reason why multiverse theory has gained credibility, and why some, like Sabine Hossenfelder, have taken an apathetic/agnostic approach to the problem.

Sam Harper said...

"I don't actually believe the range of possible values for anything is infinite. . ."

It may not be. The problem, though, is specifying how big a parameter can possibly be. While it may not be infinite, in most cases, there isn't any obvious limit to the possibilities. The possibilities seem at least potentially infinite.


You're right that Luke isn't saying it's impossible for the mass of an electron to be greater than whatever mass causes it to be a black hole. What he's saying is that if the electron's mass were so big that the electron was a black hole, then we could not describe the electron using our current theories. Our theories break down at that point. The electron would become smaller than its Compton wavelength, and neither general relativity nor quantum mechanics could explain it. That's what he means when he says we can limit the range of values to whatever range makes sense within our theories. This is just a convenience to allow us to normalize our probabilities. We don't have to consider the whole range of possible values, even if that range is infinite. We can just consider the range within which our current theories make sense and see how fine-tuned things are within that range.

If we ever do have a quantum theory of gravity, though, it will be interesting to see what Luke will say at that point.

>"I've wondered if we can just set conceptual boundaries based upon minimums and maximums already found in nature."

Hmm. If you used the gravitational constant and the strong nuclear force to mark the lowest and highest extremes, then you'd basically be saying that the force of gravity and the strong force are right at their extreme. You wouldn't be able to consider what a universe would look like if gravity were weaker or the strong force were stronger. I can see how something like this would work if you were talking about forces in general, but it seems counter-intuitive to apply that to gravity and the strong nuclear force.

Sabine Hossenfelder is apathetic toward the multiverse because she thinks it's too speculative, there's no way to test it, and it's not really science. Her main objection to the fine-tuning argument is that we have no idea what the probability distribution over the constants of nature is. She thinks that to know the probability distribution, we'd have to be able to observe a lot of other universes. But since we can't observe other universes, we can't know the distribution, so we can't know that our universe is fine-tuned.

Luke talked to her on Unbelievable a while back, and I think he answered her by suggesting Bayesian probability rather than finite frequents, which is what Sabine was using. Luke seems to think we ought to assume an even probability distribution, and Sabine just didn't agree.

I wrote my own response to Sabine's argument. My argument is that the probability distribution is either even or uneven. If it's uneven, and it happens to make life-permitting universes probable, then this is just the deeper-laws objection. If it's even, then the universe is fine-tuned. Either way, you've got a fine-tuning problem, so Sabine's argument doesn't do anything to undercut fine-tuning.

Sam Harper said...

*finite frequentism

Paul said...

Good responses, and I like your other article.

It seems to me that there is a sense in which all can agree that this universe is fortuitously configured. It's simply an academic matter of quantifying the "probability" of this or any other fortuitous arrangement. We can quibble over the range of possible values for any given parameter — some ranges being more reasonable, relevant, or coherent than others — but any limitation seems arbitrary.

My own arbitrary suggestion was the range between minimum and maximum forces. I didn't mean to say that this was a cap on the available values for these or any other parameter, but rather that we observe in nature at least a 10^43 variability that might inform our calculations. That is to say, the charts that FTA skeptics use that bound these variables at only, say, 100 times +/- are being too restrictive.

I think you're right that we can look at this from multiple perspectives and still come up with a net result of fine tuning. If the possible values are broad, then we "got very lucky." If the possible values are narrow or fixed, then how fortuitous that there must be this narrow or fixed range (the deeper law issue). Perhaps this conundrum is why some have an affinity for multiverse theory. By doing so, they seem to concede the fine-tuning premise but just suggest many rolls of the dice — the dice and the roller being left unexplained.

I appreciate Sabine's scientific objectivity in this, and her rejection of multiverse theory as empirical science. I don't think this is the last word, however, since philosophy precedes science. I can respect her if all she is saying is, "there's nothing I can add to this equation as a scientist, and I don't have the interest or expertise to engage the philosophical implications of this issue." I don't think she has grounds for saying, if she does, that the FTA has no teeth.

Sure, we only have a sample size of one for the laws of nature, but I think we can still make conclusions about what we observe in *this* sample. All our intuitions and prior experience tell us that order, complexity, information, and sentience are unusual things. We understand that there are infinitely more ways to have disorder than order. We may not know whether the universe had an origin that *could* have produced disorder, but we do understand that the universe as it exists *is* ordered, and we can see the many theoretical ways it could have been disordered. Even if this universe is just a brute fact, astonishment at it is also a brute fact.

Let's say that we explore another planet and find a large mass of something made out of a new element we'll call "Unobtainium" (a theoretically stable element with 114 protons and 184 neutrons). This object is in the shape of, say, a castle or a statue of a known or unknown creature. Two reactions might be offered here that are relevant to Sabine's objection. One would be the conclusion that it was designed and created by an intelligence. The other would be to say, "We have no experience with Unobtainium. This is our only observation, and we don't know what form the element is inclined to settle into when in quantity. Consequently, it is premature to think that this thing was designed." While the second reaction is technically correct, it is still legitimate to be astonished at the idea that a mindless object should of necessity assemble itself into a castle or statue. What the actual heck!?

Paul said...

I thought of my Unobtainium example after reading this: https://www.scientificamerican.com/article/the-quest-for-superheavy-elements-and-the-island-of-stability/

Sam Harper said...

I like the unobtainium illustration. That has a lot of intuitive appeal.