Friday, May 21, 2010

Not All Things Considered

Sì, abbiamo un'anima. Ma è fatta di tanti piccoli robot.

Yes, we have a soul. But it's made of lots of tiny robots.

-- Giulio Giorello, quoted in Dennett, Freedom Evolves, p. 1
 This post is part of my already-much-longer-than-I-expected-and-still-growing series on free will.


What I would like to see from these free will philosophers is some sort of model that shows how will works and why or in what sense it can be considered free. (I was hoping for this sort of model from Freedom Evolves, but was disappointed.) What follows is an off-the-top-of-my-head attempt that's just to illustrate the sort of model I have in mind.

No one has ever announced that because determinism is true thermostats do not control temperature.

-- Robert Nozick, quoted in Dennett, Elbow Room, p. 51

Let's start with a thermostat. As Nozick says, we have no problem with the idea that the thermostat controls temperature. But there is also the purely physical level of description, in which the parts of the thermostat are obeying physical laws, without any concern for what they are controlling or whether they are controlling it. So already with this simple system, we can talk about it on two levels, though, obviously, there is no question of any sort of free will involved.
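To make the two-level talk concrete, here's a minimal sketch in Python of the design-level description (the function and the names in it are mine, purely for illustration):

    def thermostat_step(current_temp, setpoint):
        # Design level: the thermostat "controls temperature."
        # Physical level: this very same step is just metal strips and
        # electrons obeying physical law, with no concern for control.
        if current_temp < setpoint:
            return "heater on"
        return "heater off"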

Bumping it up a notch, let's consider the Mars rover that I discussed earlier. Let's focus on a single line of code that determines whether, when confronted with a large rock in its path, the rover will turn left or right. That line might look something like this (vastly oversimplified, of course):

(A)   If X, turn left, else, turn right.
Here X is a variable that can only take on the values 0 or 1. If X is 1, the machine turns left; otherwise it turns right. X depends on some list of inputs: what the rock looks like in the video input, what the angle of tilt of the ground is on either side, whether there seems to be a clearer path on one side, etc. Some considerations might favor left and some might favor right, but all of them must be boiled down and weighted so that a clear, and deterministic, decision is made.
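Just as a rough illustration of that boiling down and weighting (the inputs, weights, and values here are all invented), the determination of X might look something like:

    # Hypothetical sensor-derived values; positive numbers favor left.
    tilt_difference = 0.2        # ground tilts less steeply on the left
    clearance_difference = -0.1  # path looks slightly clearer on the right

    def compute_x(considerations):
        # Boil the weighted considerations down to a single bit.
        score = sum(weight * value for weight, value in considerations)
        return 1 if score > 0 else 0   # clear and deterministic

    x = compute_x([(0.5, tilt_difference), (0.3, clearance_difference)])
    direction = "left" if x == 1 else "right"   # statement (A)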

We can, of course, talk about all this on the level of electrons, voltages, and circuits. The electrons are just following the laws of physics, without any sort of decision in mind. Yet, the physical system has been carefully set up (by the computer engineers and the programmer) so that the laws of physics result in a decision about something important to the rover's goals. Actually, it is the goals of the mission scientists in this case - but the rover may be thought of as analogous to a simple organism, "designed" and "programmed" by evolution to achieve its goals of survival and reproduction.


Now bump it up another notch. Suppose there is some sort of monitoring segment of the rover's program. Let's call it the monitor for short. It keeps track of X, as well as many other important variables in the program. After the variable X is evaluated, but before statement (A) executes, the monitor looks at the result and has the opportunity to override it. Why might it want to do this? Well, the monitor has access to a wider range of information than the inputs to X. It might be monitoring the level of charge in the batteries, the distance left to go to the next goal, and so forth: the long-term goals, as opposed to the local situation that is dealt with by the inputs to X. For instance, if the battery is low and there is more sun on one side of the rock than the other, then the need to maintain power might outweigh the local considerations of topography that X takes into account.
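Continuing the toy code from above (the battery threshold and sun reading are invented for illustration), the monitor might sit between the evaluation of X and statement (A) like this:

    def monitor(x, battery_level, sun_on_left):
        # The monitor sees more than the inputs to X: here, a long-term
        # goal (keep the batteries charged) can override local topography.
        LOW_BATTERY = 0.2   # invented threshold
        if battery_level < LOW_BATTERY:
            return 1 if sun_on_left else 0   # steer toward the sunlight
        return x                             # otherwise, let X stand

    x = 0   # suppose the local inputs said "turn right"
    x = monitor(x, battery_level=0.15, sun_on_left=True)
    # The battery is low and the sun is on the left, so the override
    # flips the decision; statement (A) then executes on the new X.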

The monitor, I propose, plays a similar role to Ekstrom's evaluative faculty. It is responsible for taking into account the overall state of the organism, as well as both short-term and long-term goals. And - I suggest - it gives the organism something like free will: the ability to pause and consider alternative possibilities and their consequences before proceeding.

Why would you want such a faculty? Why not, for instance, just shovel those additional considerations into the determination of the variable X? From a programming point of view, there may be other reasons to want such a monitoring program, and it might just make more sense to include this override capability in the monitor rather than in X. Flipping the switch on our intuition - thinking about an evolutionary sequence instead of a human design - it might be that the evaluative faculty represents an evolutionary step that overlays the earlier, more mechanical, system. "If it ain't broke, don't fix it" probably goes for evolution, too: if some system is working well as it is, then, rather than tinkering with that system, it might be preferable to add a new system on top of the old one. (Of course, there is nothing more or less "preferable" to Evolution herself - she merely allows what works to succeed and what fails to fail. What I mean is that a mutation that changes the old system might be disastrous, while a mutation that adds a little bit of monitoring might enable enhanced survival without messing up the old system.)

But why not have the monitor do all the work? Why not monitor all the variables of the program, and all the conditions of the environment, and consider all the various permutations of options and outcomes? As Dennett points out in Freedom Evolves, there simply isn't enough time to consider all things. If you tried to consider all your options - cut your fingernails, cut your hair, walk the dog, eat breakfast, jump out the window, nail your hand to the table, eat the curtains, eat the dog, walk the cockroach, ... - you would never get out of bed in the morning. We are overwhelmed at every instant by input - sights, sounds, smells - and one of the most important tasks is to ignore stuff: to filter out the unimportant, in order to focus on the important. The other important task is to filter the possible outputs - to choose, from the infinite range of possible actions, the one to do next. A monitor that tried to take everything into account, to consider every possible course of action, would be useless. It would never get anything done, and the organism would die.
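In code terms, any workable monitor has to prune hard. A toy sketch (the actions and their relevance scores are invented):

    def shortlist(options, relevance, k=3):
        # Consider only the k most relevant options, not the endless
        # space of possible actions; everything else gets filtered out.
        return sorted(options, key=relevance, reverse=True)[:k]

    actions = ["eat breakfast", "walk the dog", "cut your fingernails",
               "eat the curtains", "nail your hand to the table"]
    scores = {"eat breakfast": 0.9, "walk the dog": 0.7,
              "cut your fingernails": 0.3, "eat the curtains": 0.01,
              "nail your hand to the table": 0.0}
    print(shortlist(actions, relevance=scores.get))
    # ['eat breakfast', 'walk the dog', 'cut your fingernails']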

I don't know if I've managed to capture some hint of the nature of free will in this very simple model, but just for kicks let's see what it implies about Ekstrom's theory of free will. Suppose I put such an evaluative faculty, such a monitor, into my Mars rover. Would I want to make it deterministic or probabilistic? From the point of view of control, determinism is the clear winner. As the programmer or human monitor of the rover, I want to know what it's going to do in a certain situation and why it's doing it - what portion of the program sent it down that path. From the point of view of an organism - I'm not sure. It might be helpful in certain situations to have a random component to one's actions - in flight from a predator, for instance. But in considering the issue of free will, well, a random component might make my actions less predictable, but would it make them more free? Wouldn't I rather make the optimal decision based on the (necessarily limited) inputs I have available, instead of flipping a coin to determine my actions? Personally, I think I would prefer that my actions be determined by my deliberation process, not just probabilistically caused by it.
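The contrast fits in one toy function (the follow-through probability is invented): a deterministic monitor's deliberation fixes the action, while a probabilistic one only makes it likely:

    import random

    def act(x, deterministic=True, p_follow=0.9):
        if deterministic:
            return x   # the action is exactly what deliberation settled on
        # Otherwise, with probability 1 - p_follow, do the opposite of what
        # deliberation recommended: less predictable, but also less under
        # the control of the deliberation itself.
        return x if random.random() < p_follow else 1 - x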

When Ekstrom faces up to the question of what good an indeterministic component of the evaluative faculty actually does, all she can come up with is, "Well, we need it to avoid determinism, because determinism is unthinkable." But if all it does is lose us some amount of control over our actions, maybe we don't want it after all. Maybe it's time for another look at determinism.
