Saturday, August 07, 2021

Entropy, fine-tuning, multiverses, and Boltzmann brains

I got my car inspected this morning, and as I sat in the waiting room looking at the TV, I got to thinking about the pixels and how one time I had looked at my iPhone screen through a pocket microscope and seen a bunch of small squares coloured red, green, and blue. I imagined that the TV screen was built the same way and that any image you might see on the TV could be formed from just those three colours.

Then it hit me how this could be used as a perfect analogy to talk about entropy, probability, and one of the first objections I had to teleological arguments.

I was first introduced to "the" teleological argument in my freshman philosophy class in the spring of 1997. At the time, I thought the teleological argument was the weakest argument for God because whereas the other arguments were deductive, the teleological argument could only give you a probability. Supposedly, life was improbable because, like a watch, it required a specific arrangement of parts in order to function. My main objection to this argument was that any arrangement of parts you could think of was equally improbable. I remember using this analogy: If you were to toss up a handful of pennies, no matter how they land, that arrangement will be extremely improbable. Yet there's nothing remarkable to be explained since they had to land some way.

The mistake in my argument was in thinking of each possible arrangement in isolation. The reality of the matter is that there are different kinds of arrangements, and what's improbable is that any given arrangement will fall within a certain kind. I don't know why this wasn't more obvious to me back then because it's so plainly obvious to me now. Anyway, let me use the TV analogy to explain myself while it's fresh on my mind.

Let's imagine the TV screen has 1080 pixels, and each pixel can be either red, green, or blue. So there are three possibilities for each pixel, and we want to know how many possible arrangements there can be on the whole screen. To find that number, you'd have to multiply 3 by itself 1080 times. So, 3 x 3 x 3 x . . . = 3^1080. That's an enormous number. I'm not sure how to convert it to base 10 because it's been too long since I took a math class.

[EDIT: I figured it out. You just set 3^1080 equal to 10^x, and solve for x. You can take the natural log of both sides, which gives you 1080*ln(3) = x*ln(10), so x = 1080*ln(3)/ln(10). If I did that right, then x = 515.29. That means 3^1080 = 10^515.29, which is a big ole number. Somebody correct me if I did that wrong.]

[2nd EDIT: I guess it would've been simpler if I had used log instead of ln. It's just that I remember using ln to solve these kinds of problems back in the day. Anywho, carry on.]
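
For anyone who wants to check that arithmetic, here's a quick sketch in Python (the 1080-pixel screen is just my made-up number from above):

    import math

    pixels = 1080  # each pixel can be one of 3 colours

    # Solve 3^1080 = 10^x for x, i.e. x = 1080*ln(3)/ln(10)
    x = pixels * math.log(3) / math.log(10)
    print(x)  # ~515.29

    # Same answer using log base 10 directly, as in the 2nd edit
    print(pixels * math.log10(3))  # ~515.29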

Statistically, any arrangement is just as improbable as any other arrangement. If you were to randomly shuffle the screen, the probability of any given arrangement is 1 in 3^1080. Yet it has to land on some arrangement. So why think it's any more probable to get static noise than to get a recognizable picture of my cat?

The reason is that if you compare the number of arrangements that produce a picture to the number of arrangements that produce random noise, the noise-type arrangements vastly outnumber the picture-type arrangements. So it's far more probable that you'll end up with random noise than with a picture. Never mind a picture of my cat; it's highly improbable that you'd end up with any picture at all.
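
Here's a toy version of that point in Python. I'm using a tiny 4-pixel screen so everything can be enumerated, and I'm treating "all one colour" as a crude stand-in for a picture-type arrangement (my simplification, not a real image test). The point is just that every individual arrangement is equally likely, yet the special kind is a tiny fraction of the whole:

    from itertools import product

    colours = ["red", "green", "blue"]
    pixels = 4  # tiny toy screen so we can enumerate every arrangement

    screens = list(product(colours, repeat=pixels))  # all 3^4 = 81 arrangements

    # "Special" kind: every pixel the same colour
    special = [s for s in screens if len(set(s)) == 1]

    print(len(screens))                 # 81 arrangements, each with probability 1/81
    print(len(special))                 # only 3 of them are "special"
    print(len(special) / len(screens))  # ~0.037, and it shrinks fast as pixels grow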

So I was just looking at the wrong probability. The question isn't how probable it is that you'd get one particular arrangement instead of another, but how probable it is that whatever arrangement you got would fall within a certain kind. In the case of living beings, the question isn't how probable a particular arrangement of molecules that makes a human being is compared to any other arrangement. The question, rather, is how probable it is that you'd end up with the kind of arrangement that operates like a mechanical machine capable of performing some function, instead of random noise or a puddle of homogeneous goo. There are certain kinds of arrangements that are special in some way, whether they produce a recognizable image on a screen, information, a machine, biological life, or whatever.

This applies to teleological arguments from biological life as well as fine-tuning. Of course, in the case of biological life, there is a mechanism (evolution by natural selection) that makes it possible to arrive at improbable arrangements through a series of small steps. A lot of people hope that something similar will come along to explain fine-tuning.

One popular attempt to answer fine-tuning is to artificially increase our probabilistic resources. Using the TV analogy, you can count each shake, or random try, as a probabilistic resource. If you shook things up randomly 3^1080 times, you'd be practically guaranteed to get one of those special arrangements that produces an image of something. The more random tries, the more probable it is that you'll get a special arrangement. In the same way, the more universes with randomly ordered constants there are, the more probable it is that you'd get one with just the right values to make chemistry, and therefore life, possible.
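
The effect of piling up tries can be put into a single formula: if one shuffle lands on a special arrangement with probability p, the chance of at least one hit in n independent shuffles is 1 - (1 - p)^n. A quick sketch with made-up numbers:

    # Probability of at least one "special" arrangement in n independent shuffles
    def at_least_one_hit(p, n):
        return 1 - (1 - p) ** n

    p = 1e-9  # made-up chance that a single shuffle produces a special arrangement
    for n in [1, 10**6, 10**9, 10**10]:
        print(n, at_least_one_hit(p, n))  # climbs from ~0 toward certainty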

The problem with taking this approach is that it creates the Boltzmann brain problem. This is where the TV analogy can be used to explain entropy. The second law of thermodynamics can be thought of in statistical terms. Given a fixed amount of energy (i.e. a closed system), there are many forms that energy can take and many different arrangements it can be in. Take heat energy, for example. If you have a room with a hot end and a cold end, heat will move from hot to cold until the room reaches a state of equilibrium. Once the heat is equally distributed, the temperature will be the same everywhere. And once that happens, heat will no longer flow. Well, for energy to be used to do work, it has to be transformed from one kind to another or from one thing to another. Heat energy can be converted into mechanical energy, for example, in a heat engine. Mechanical energy can be converted to electrical energy in a generator. Electrical energy can also be converted to mechanical energy in a motor, or into heat energy on your stove.

This process is never done with 100% efficiency, though. In the case of the room whose temperature equalizes, you have just as much energy at the end of the process as at the beginning because it's a closed system, and the energy remains constant. But once equilibrium is reached, none of that energy is available to do work anymore. In the same way, when you convert energy from one form to another, a certain fraction of that energy will no longer be available to do work. This is the problem with perpetual motion machines. If you had a motor that turned a generator, which in turn supplied electrical power back to the motor to keep it turning the generator, eventually the whole system would reach a state of equilibrium. None of the energy would be available to do work, and the machine would shut down. When I was in nuclear power school in the Navy, we were taught the rule: "A heat engine must reject heat." That's because there's always going to be a certain amount of energy that can no longer be used, and in a closed system, eventually none of the energy will be available to do work anymore.

Entropy is, roughly, a measure of the energy that's no longer available to do work. It can also be thought of as the homogeneity of energy, or the spreading out of energy, or the degree of equilibrium, or randomness, or whatever. They all amount to basically the same thing. The second law of thermodynamics says that the total entropy in a closed system (i.e. a system in which energy neither enters nor leaves) will increase with every process. Basically, whenever anything at all happens in the universe, the total entropy of the universe increases. This is true even when it appears as if entropy has gone down. The reason is that if something becomes more ordered, or arranged in such a way that its energy can be used, the entropy will increase somewhere else. For example, when ice crystals form, they do so by giving off heat, and that heat dissipates into the universe, increasing the overall entropy of the universe.
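
One way to see why equilibrium wins statistically is just to count arrangements. If the room's heat comes in, say, 20 little packets of energy, each of which can sit at either end, there are far more ways to split them roughly evenly than to pile them all on one end. A sketch, with a made-up packet count:

    from math import comb

    n = 20  # made-up number of energy packets, each at the hot or cold end

    # Number of arrangements with k packets at the hot end (out of 2^20 total)
    for k in [0, 5, 10, 15, 20]:
        print(k, comb(n, k))
    # 184,756 ways to be split evenly (k=10), but only 1 way to have
    # all the heat at one end (k=0 or k=20)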

So let's go back to the TV analogy. If you started off with a nice crisp image of a cat and you shook it just a little, that image would be less crisp. And the more you shook it, the less you'd see an image and the more you'd see randomness. The reason is that each shake leaves the screen in a less ordered state, because disordered states are statistically far more probable than ordered states. Since the universe is in a constant state of change, the arrangement of particles is constantly changing. And since disordered states vastly outnumber ordered states among all the possible configurations of particles in the universe, it follows that as the universe changes, its entropy increases. It would be mind-blowingly improbable for this process to ever reverse. And even if it did reverse for a brief moment, it would immediately go back to increasing entropy.
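
You can watch this happen with a little simulation: start with a perfectly "ordered" screen, randomly re-colour one pixel per shake, and track how much of the original image survives. This is just my own toy model of the shaking, nothing rigorous:

    import random

    random.seed(0)
    colours = ["red", "green", "blue"]
    screen = ["red"] * 100  # a perfectly ordered 100-pixel "image"

    for shake in range(1, 501):
        i = random.randrange(len(screen))
        screen[i] = random.choice(colours)  # each shake randomly re-colours one pixel
        if shake % 100 == 0:
            order = sum(1 for p in screen if p == "red")
            print(shake, order)  # decays toward the random baseline (~1/3 red)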

Now let me explain how this applies to the multiverse solution to fine-tuning and how it creates the Boltzmann brain problem. Let's say I shake the TV enough times to create an image of a butterfly in some random place on the screen, while the background remains random. That is far more likely to happen than to have the entire background filled in with other butterflies, leaves, flowers, and stuff. So if we shook the TV, say, 3^100,000,000 times, you'd get far more screens with an image of just one butterfly and a random background than screens filled with butterflies, leaves, flowers, etc., and no random background.
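
The counting behind that is straightforward. If a butterfly takes up some block of k pixels, then a one-butterfly screen leaves the remaining n - k pixels free to be anything, so there are about 3^(n-k) such screens (times however many places the butterfly could sit), while a fully composed scene is close to one specific arrangement. A rough sketch with made-up numbers:

    import math

    n = 1080  # total pixels, the toy screen from earlier
    k = 100   # made-up size of one butterfly image, in pixels

    # Orders of magnitude of "one butterfly plus random background" screens:
    # the n - k background pixels can each be any of 3 colours
    print((n - k) * math.log10(3))  # ~467, i.e. about 10^467 such screens

    # A fully composed scene (butterflies, leaves, flowers everywhere) is
    # close to a single specific arrangement, so the ratio is ~10^467 to 1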

Now, consider the universe. At the beginning of the universe, the total entropy was very low. It's been going up constantly for the last 13.8 billion years. The universe is approaching a state of equilibrium, and when it reaches that state, there won't be any more life, stars, galaxies, etc. But we are far from there yet. There is still enough order in the universe to produce stars, galaxies, and biological life. But imagine if instead of an entire universe like ours, filled with galaxies and containing as much life as there is on earth, you had just one sentient life form, and the rest of the universe was in equilibrium. Given the same amount of matter and energy, a universe with one life form in a sea of equilibrium is statistically far more probable than a universe full of stars, galaxies, and billions of life forms. It's just like how a screen with one butterfly in a sea of randomly ordered colours is more probable than a screen filled with images of butterflies and things. And just as shaking the screen a gazillion times would produce far more screens with just one butterfly and a random background than screens filled with butterflies and things, so also creating random universes would produce far more universes consisting of just one life form in a sea of thermodynamic equilibrium than universes like ours, where the whole thing is full of order and contains billions of life forms.

What needs to be explained, though, is just the perception of a universe like ours. After all, it is from this perception that we draw all our conclusions about the universe. Presumably, if we assume physicalism, this perception is produced by our brains. So all you'd need to produce these images is a brain whose internal structure is exactly like that of a brain getting input from its sensory organs. And you'd really only need that brain to exist for a split second in order to explain your experience. After all, you only experience the present. It's possible you came into existence a split second ago, complete with perceptions, beliefs, memories, and everything, and if you really are just a brain floating in space, you're probably going to die in just a moment. The fact that you haven't died yet only means you came into existence a split second ago.

Statistically, a brain without a big, well-ordered, biologically complex body spontaneously emerging in a sea of thermodynamic equilibrium is far more probable than a whole universe like ours spontaneously emerging, if you're just producing random universes.

So if you appeal to a multiverse in order to explain fine-tuning, then you run up against the Boltzmann brain problem. A Boltzmann brain is an isolated brain that comes into existence complete with perceptions, beliefs, memories, etc., and it perceives a universe like ours. Since Boltzmann brains are vastly more probable than universes like ours, it follows that if we try to explain fine-tuning by appealing to the probabilistic resources of a multiverse, there would be far more Boltzmann brains than people living in universes like ours. That being the case, it is far more probable that you are a Boltzmann brain than that you are living in a real universe like ours appears to be.

You could respond to this by biting the bullet and saying, "Okay, so it's likely I'm a Boltzmann brain." After all, this argument is similar to the simulation hypothesis in that we are comparing the number of sentient beings inside a simulation to the number of people in the real world. A lot of people think we are in a simulation because of this argument. If you click on the link, you'll see my reasons for thinking we are not in a simulation. One of those reasons is just an appeal to common sense realism--the view that we should take the world as it appears to be absent any good reason to think otherwise. We have such a strong intuition that the world is real that we only pretend to take brain-in-a-vat scenarios seriously. They're great thought experiments, but if we're totally honest with ourselves, hardly any of us really believe them. We have such a strong intuition that our senses are giving us true information about a real external world that we cannot bring ourselves to deny its reality even in the face of arguments (like Zeno's paradoxes) that we can't answer.

That's why Boltzmann brains are a problem. If you have a model of reality that generates the Boltzmann brain problem, that's a good reason to reject the model. You have to reject it on rational grounds because you probably don't really think you're a Boltzmann brain. If we're going to be serious, we can't embrace models of the world that make it more likely that we are Boltzmann brains than ordinary observers. So we have to reject any multiverse model that generates the Boltzmann brain problem.

There are multiverse models that don't generate the Boltzmann brain problem, but those models also don't answer the fine-tuning problem. Maybe somebody will come up with a model that answers fine-tuning without generating the Boltzmann brain problem, but so far I don't know of one. Let me just mention a few I'm aware of.

First, there's the many worlds interpretation of quantum mechanics. This one doesn't explain fine-tuning because each branching universe has exactly the same values for its constants.

Second, there's string theory (or M-theory). This doesn't explain fine-tuning for two reasons. First, it doesn't predict a multiverse; it only makes a multiverse possible. The possibilities are limited to about 10^500 because there are roughly 10^500 different possible spatial geometries, each producing a different configuration of constants. Second, even if string theory did predict a multiverse, it would generate the Boltzmann brain problem, because while those 10^500 different kinds of universes are possible, the actual configuration of constants in each universe is still random. It's random which of the 10^500 possibilities any given universe will turn out to have.

Third, there's the "eternal inflation" model. (I had thought it might be called the ekpyrotic model, but the ekpyrotic model is actually a different, cyclic scenario.) According to eternal inflation, the whole of space is in a state of constant rapid expansion, and every now and again some small region drops out of this rapid expansion and coalesces into a bubble universe. Ours is just one of them, which is why our current big bang model has an inflationary period at the beginning. When inflation ends in some bubble universe, it takes on a set of values for its constants. This model, too, creates the Boltzmann brain problem. It may also run up against the Borde-Guth-Vilenkin theorem, but I'm not sure.

Fourth, there's another model that's almost just like the inflationary model above except that there's no inflation. There's just an infinite sea of equilibrium that exists for an infinite amount of time. Given infinite time, it becomes statistically probable that isolated regions will drop into a state of low entropy just by random chance, and our big bang was one such fluctuation. This model obviously creates the Boltzmann brain problem.

There are other models. Some of them don't create isolated universes but sequential ones, so they're not really multiverses in the usual sense of the term. Instead, they are cyclic universes in which the same universe gets a fresh start multiple times. In Roger Penrose's cyclic model (conformal cyclic cosmology), I don't think the constants are different in each cycle, so it doesn't answer the fine-tuning problem. But if they were different each time, then it would solve fine-tuning while creating the Boltzmann brain problem.

I guess that's about all I had to say today.
