Usually when we think of a rational belief, we think of a belief that is held for good reasons. There are two ways a belief might be held for good reasons, though. One way is that some evidence or line of sound reasoning resulted in the belief. That's true of a posteriori beliefs. Another way is that a person can, by reflecting on something, grasp the necessity of it. That's true of at least some a priori beliefs.
But what about other beliefs, such as the belief in the reliability of our cognitive faculties, our sensory perceptions, and our memories? There's no necessity to these beliefs. Whether there is any line of reasoning that could lead to these beliefs is questionable. And if there were a line of reasoning that led to these beliefs, its premises would likely be less certain than the belief itself, which means the belief is probably not held because of that line of reasoning.
Some beliefs appear to be hard-wired. They're just built in. We automatically assume that past experience can tell us something about what to expect in the future. We automatically assume that when we remember things, it's because the past really happened. Some people become philosophers and begin to question these things, but even people who question them find it hard to do. They have to continuously resist the temptation to attribute truth to these seemingly built-in notions.
So what justifies them? Since they cannot be justified on the basis of sound reasons, nor on the basis of their necessity, it seems like the only way to justify them is on the basis of their causes. If they are caused by a reliable mechanism, then they can be trusted, but if they are caused by an unreliable mechanism, then they can't be trusted.
Suppose these beliefs are caused merely by the structure and activity of the brain. We would have to trust that the brain is built in such a way as to be reliable. We can trust a calculator to give us the right answers because calculators were designed by people with accuracy in mind. Brains, though, had to have come about in one of two ways: either they were engineered, or they arose through purely blind, natural processes.
Suppose the brain was engineered. Before we could trust the deliverances of our brains, we'd first have to presuppose the trustworthiness of the engineer. After all, it's possible somebody could stick a probe in your brain and cause you to believe all sorts of things that aren't true. Maybe we don't have the technology yet, but it's conceivable that the technology could exist. So if we held a belief because an engineer designed our brains to produce it, or because he somehow installed it directly into our brains, would that belief be rational?
I don't know, but I suspect you'd need the presupposition that the engineer was honest/reliable or that the engineer designed the brain in such a way as to produce true a priori beliefs. Otherwise, the belief might not be rational.
But then how would you justify the belief that the engineer was reliable and had truth in mind when designing the brain or installing the beliefs? It seems like the only way to justify that belief is purely on pragmatic grounds. We have no choice but to trust our cognitive faculties. We couldn't rationally doubt them unless we could trust them, so to doubt them is self-refuting. If I say, "My brain is an unreliable truth-generating machine," I would have no rational basis upon which to believe that statement is true, since my brain is the only thing that could possibly tell me whether or not it's true.
Maybe it's rational to trust our brains merely because it's irrational not to, and it's irrational not to because any claim that our brains are unreliable will necessarily be a self-refuting claim. Of course, self-refuting claims aren't necessarily false. It depends. A claim can be self-refuting in one of two ways. It can be self-refuting in such a way that if it's true, then it's false, in which case it's necessarily false. Or it can be self-refuting in such a way that if it's true, then it can never be justified, in which case it isn't necessarily false. The claim that the brain is an unreliable truth-producing machine is self-refuting in the second sense, but not the first. (Maybe we should call the second sense "self-undermining" instead of "self-refuting.") So while it may be irrational to doubt the reliability of your brain, that doesn't necessarily mean your brain is reliable. It just means it's more rational to affirm the reliability of your brain (since you can do so consistently) than it is to deny the reliability of your brain (since you cannot do so consistently).
If we must trust the reliability of our brains on pain of self-refutation, then it seems like we must also believe that the brain was produced in such a way as to guarantee its reliability. So if you believe the brain was engineered, then you must believe the engineer was honest and had truth in mind when he designed the brain. Otherwise, you're being irrational.
The alternative is to believe the brain evolved naturally in such a way as to be reliable. In this case, nobody meant for the brain to hold true beliefs. It's just that holding true beliefs was conducive to survival and reproduction. I don't want to explore this right now because it would take us into Alvin Plantinga's evolutionary argument against naturalism. I'll just say that if a sound argument can be made for why we should expect the brain to be reliable on the supposition that it evolved naturally, then that would give us a rational ground for our built in beliefs.
So basically, I think our beliefs can be rational on one of three grounds: sound reasoning from true premises, intuitively grasping their necessity, or having them built into our brains under the presupposition that the brain was produced in such a way as to be a reliable belief-producing machine, whether that presupposition involves a brain engineer, a truth-favoring evolutionary process, or some combination of the two.
I have a lot more to say about this, but I guess this post is long enough already.
1 comment:
Once you make the naturalistic move to talking about reliability, you have switched from a normative conversation to a descriptive one, where normative concepts like justification no longer apply.
Building parsimonious models of experience just is what minds do, just like blossoming is what flowers do and meowing is what cats do. But the flower itself can't "presuppose" blossoming, and my cat can't "presuppose" meowing. Just so, the reliability of my cognitive faculties does not entail the proposition "this faculty is reliable" as a premise from which other conclusions are derived via a system of epistemic norms.
Behaviors don't presuppose anything, and thinking rationally is a behavior. Consider me attempting to swim unaided across the Atlantic Ocean. Maybe I believe, delusionally, that I will succeed; maybe I don't believe this. But I don't have to "presuppose" anything about the probability of success to engage in the behavior of swimming. My attempt can certainly come in for criticism for being delusional or for being a generally foolish behavior, but anyone can simply attempt the swim, completely irrespective of the truth or falsity of the proposition "this attempt will succeed". The foolishness doesn't enter into the physical description, qua description.
And so we try to generate coherent world-pictures because that's just what we do. I don't know if you're a lucid dreamer, but I average about 2 or 3 of those a month. I've also used numerous psychedelic drugs at different times in my life and been under the lingering effects of botched general anesthesia, often with quite vivid, florid hallucinatory effects. If any epistemic project were foolish or doomed ab initio, it would be trying to build coherent models within examples like these, where my sensory and cognitive faculties are pretty much definitionally unreliable.
And yet that's what the mind just *does* whenever you turn it on. ("I forgot I registered for this class and the exam is in 30 minutes -- wait, didn't I graduate ten years ago?!?"; "We can't play the music too loud, or the mannequin men from Babylon Five will silently judge me. And they're always silently judging me.") The output may be silly, but it's still the output of a rational process, given the inputs. At any given point, the rational thing to believe is the most parsimonious model of the inputs with the greatest predictive power, because that's what constitutes the behavior of thinking rationally about the world. The process itself doesn't contain any information about the reliability of itself, and doesn't need to.
Confusion creeps in when we slip between levels of description, the model and the meta-model, the empirical and the normative. We can form a model of our thoughts within the model, then feed it counterfactual data and see how it performs. But the a posteriori judgment that our meta-model is spitting out accurate behavioral outputs is not the same thing as an a priori presupposition that the base-level model is reliable.
Note also that "reliable" is a relative term, and any theory about the causal backstory of our belief-forming systems must, must, MUST take into account our many well-known and meticulously documented perceptual illusions, cognitive biases, heuristic shortcuts, emotional blind spots, etc. that make us reliably UN-reliable in depressingly predictable ways. What were the design goals of our "engineer" such that we ended up with differential performance between social and non-social contexts for the Wason Selection Task, for example? What were his/her/its constraints?