Friday, January 18, 2019

An argument for epiphenomenalism from materialism

I don't know if this is a sound argument or not. I came up with it just last night. I thought I'd post it here in case anybody wants to comment on it.

First, let me define "epiphenomenalism" and "materialism." Materialism is the view that only material things exist. For the purposes of this post, that means there isn't an immaterial soul or spirit that haunts your brain. There's just a brain. Epiphenomenalism is the view that the brain gives rise to mental phenomena (like sensation, thought, desire, emotion, etc.), but that causation does not run in the other direction. In other words, the mind that emerges from the brain cannot have any causal influence over the brain. The mind, in that case, is just a passive observer. This would entail that volition is an illusion, since you can't cause anything to happen in your body by desiring or willing it to happen.

This guy I was talking to last night is a software developer. He used a computer as an analogy for how he understands the relationship between brains and minds. He said you can talk about a computer at various levels of abstraction. On the bottom level, you've just got electrons and atoms in motion obeying the laws of physics. A layer up, you've got logic gates that either allow or disallow electricity to flow. Close to that level (and maybe on the same level), you've got 1's and 0's. Then you've got machine code, then software code, and up and up the layers go until you have semantic meaning. In the same way, you can describe the brain at the level of atoms or at the level of mind, but the mind is basically the same thing as the brain. It's just described at a higher level of abstraction.
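
To make the layers idea a little more concrete, here's a small sketch of my own (not something my friend actually wrote) showing one and the same computation described at two levels: ordinary arithmetic on top, and the kind of bit-level operations a machine might use underneath.

    # One and the same addition, described at two levels of abstraction:
    # high-level arithmetic on top, a bit-level ripple-carry version below.

    def add_bitwise(a, b):
        """Add two non-negative integers using only bitwise operations."""
        while b:
            carry = a & b   # bits where both numbers have a 1
            a = a ^ b       # add without carrying
            b = carry << 1  # shift the carry into the next place
        return a

    print(3 + 4)              # high-level description: 7
    print(add_bitwise(3, 4))  # lower-level description of the same result: 7

Nothing new happens when you move up a level; the same addition just gets re-described in higher-level terms.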

I questioned him on whether there was causal interaction between these different levels of abstraction. My next question was going to be whether he thought the direction of causation went both ways or only one way. He didn't seem to think there was causation in either direction because these are just different levels of abstraction. They're actually the same thing, so causal interaction doesn't come into play.

So I switched gears, and this is what made me think of the argument. Even if there's no chain of causation in the way we usually think about it, there is at least some sort of logical connection. The activity of the individual atoms determines, in some way, the function of the software. Whether it does that by the mechanism of causation doesn't matter for my point. What I wanted to know was whether the function of the software determines the activity of the individual atoms in the same way that the activity of the atoms determines the function of the software. I didn't get a clear answer, but let me explain the argument I came up with.

It seems clear that if a certain arrangement or activity of subatomic particles produces a certain outcome on the macro-level, then whenever you repeat those exact same conditions on the micro-level, you will get the exact same outcome on the macro-level. You can't fail to get the same results if the underlying physical structure and activity are exactly the same.

But it doesn't work the other way around. Let's say you have two different computer programmers write code for a procedure that takes some inputs, performs some function, and gives you an output. The two programmers could write different code to accomplish the same thing. If you were looking at the computer screen, you wouldn't be able to tell the difference. You'd see the same prompt for your input, and you'd get the same output. There'd be no way to tell from the outside that the underlying code was different.
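
Here's a small sketch of that point (again, my own illustration with made-up function names, not anything my friend said). Two different implementations behave identically from the outside, and each one, run with the same inputs, always gives the same output.

    # Two different pieces of code with the same outside-facing behavior:
    # both compute the sum of the integers from 1 to n.

    def sum_to_n_loop(n):
        """Add up 1 through n with an explicit loop."""
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_to_n_formula(n):
        """Get the same answer from the closed-form formula n(n+1)/2."""
        return n * (n + 1) // 2

    # Same code, same inputs -> same result, every time (micro fixes macro).
    # Different code, same behavior -> the macro-level result doesn't pin
    # down which implementation is underneath (macro doesn't fix micro).
    for n in (5, 10, 100):
        assert sum_to_n_loop(n) == sum_to_n_formula(n)

    print(sum_to_n_loop(100))     # 5050
    print(sum_to_n_formula(100))  # 5050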

What all this means is that in cases where properties emerge in a macroscopic way from underlying microscopic conditions, the stuff at the bottom level determines the stuff at the top level, but the stuff at the top level does not determine the stuff at the bottom level. So if the mind is an emergent property of the brain, then the brain activity would determine the content of the mind, but nothing about the mind would have any influence on the brain activity. The "causation" only runs in one direction.

One weakness of this argument is that you couldn't have just any code perform the same function. There are limits. So you might say that the top layer of abstraction puts some constraints on the lower levels, in which case it would appear to have some determining influence.

What are your thoughts?
