Sunday, December 22, 2013

Physics Lies!

I've been bashing the theists for a while, so I'm going to take a break and bash an atheist for a change.

Nancy Cartwright is the author of How the Laws of Physics Lie, and is apparently an influential philosopher. Kitcher mentions her in his Mind and Cosmos review, she is regularly included in anthologies of important papers in the philosophy of science, and her term "Dappled World" (borrowed from Gerard Manley Hopkins, according to Kitcher) seems to have become a sort of rallying point for modern philosophical views of science.

I think her work is shoddy and unconvincing.

OK, choosing that title for her book is like waving a cape in front of a bull, I suppose. But I really tried to give her a fair hearing, honest I did. Readers of this blog can, I suppose, judge how good I am at giving a fair hearing to views I disagree with. But let me try to present her argument before I tear it apart.

The Laws of Physics Lie

In Essay 3 of the book, Cartwright lays out the case that the laws of physics are not true, or, to the extent they are true, they are not interesting. In fact, they are not even approximately true. Why?

She illustrates with Newton's law of universal gravitation. For the definition of this law she quotes Feynman:

(NG1) The Law of Gravitation is that two bodies exert a force between each other which varies inversely as the square of the distance between them, and varies directly as the product of their masses.
Then she asks, and answers:

Does this law truly describe how bodies behave?

Assuredly not.
Why not? Well, she says, two bodies may have electric charges, and so the force exerted by one on the other is given neither by NG1 nor by Coulomb's law of electric force, but by a combination of the two. Therefore

These two laws are not true: worse, they are not even approximately true.

For instance, in an atom, the law of gravitation is swamped by the Coulomb force and so the former is not even approximately true.

Notice that she is not complaining about extreme cases where Newtonian gravitation must be replaced by General Relativity. Her complaint is that the law doesn't state a fact: except, perhaps, in a universe completely empty except for two uncharged objects.

She considers an alternate version of NG1 that uses a prefatory clause to correct the deficiency:

(NG2) If there are no forces other than gravitational forces at work, then two bodies exert a force between each other which varies inversely as the square of the distance between them, and varies directly as the product of their masses.

This, she allows, may be a true law, but it is not a very interesting one, for in reality objects have both kinds of properties (mass and electric charge) and so NG2 has no (or very few) applications in reality.

From a physicist's point of view, this is all very wrong-headed. First of all, since she is talking about combining different influences, the law she ought to be talking about is Newton's second law of motion:

(NA) The acceleration of an object is proportional to the net force and inversely proportional to its mass.
Somehow, she completely avoids mentioning this law anywhere in the chapter (though she mentions it obliquely in considering a related objection, as we will see below). Since she doesn't mention NA, I don't know whether she considers this one of the laws that is not even approximately true. But you can't talk about gravitational and electric forces combining without it. With this framework in mind, there is an obvious alternative to NG1 and NG2:

(NG3) For two massive bodies, there is a contribution to the net force that varies inversely as the square of the distance between them, and varies directly as the product of their masses.

With this change, I submit, we have a law that is at least approximately true, with no further need of ceteris paribus clauses.
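To make NG3 concrete, here is a minimal sketch of my own (not Cartwright's or Feynman's): each law contributes a term, and NA combines the contributions into a net force. The function name and the test values are assumptions for the example.

```python
# Sketch of NG3 + NA: each law contributes a term to the net force.
# Constants in SI units; the two-body setup is purely illustrative.
G = 6.674e-11   # gravitational constant, N*m^2/kg^2
k = 8.988e9     # Coulomb constant, N*m^2/C^2

def net_force(m1, m2, q1, q2, r):
    """Net radial force between two point bodies.
    Gravity attracts (negative sign); like charges repel (positive)."""
    f_grav = -G * m1 * m2 / r**2   # gravitational contribution (NG3)
    f_coul = k * q1 * q2 / r**2    # Coulomb contribution
    return f_grav + f_coul         # NA: the contributions simply add
```

For two like-charged bodies the Coulomb term swamps gravity, as in Cartwright's atomic example, but both contributions are present in the sum; neither law has to be "true alone" for each to state a fact about its contribution.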

This is such a simple solution to Cartwright's difficulty that it's hard to believe she missed it. However, she does go on to discuss the law of vector composition of forces, and then to consider a suggestion of Lewis Creary that is similar to NG3. Let's consider what she has to say about these issues.

Vector Addition

Cartwright admits that physicists have an answer to the question of combining forces: she calls it the "vector addition story."

The vector addition story is, I admit, a nice one. But it is just a metaphor. We add forces... when we do calculations. Nature does not 'add' forces. For the component forces are not there, in any but a metaphorical sense, to be added....
For Cartwright, the individual component forces are not real. Only the net force is real.

But this is quite obviously false. Consider, for example, a spring that is subject to equal and opposite forces on its two ends:

The net force is zero, so the center of mass of the spring doesn't accelerate. But the spring is compressed - the component forces have a real, physical effect.

Try telling this guy that the component forces aren't real:

Cartwright is essentially saying "'two apples plus one apple equals three apples' can't be true, because if the two apples and the one apple are real, and the three apples are real, then I would have six apples, not three." But this is just silly: two apples plus one apple equals three apples because that's simply what addition means, when applied to apples. Similarly, in the vector addition of forces, two real, occurrent forces can be added to make a real net force, because that's simply what it means to combine two forces.
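The spring case can be sketched in a few lines (my own toy example, with made-up numbers and an assumed Hooke constant): the component forces sum to a zero net force, yet the compression depends on the components themselves.

```python
# Toy spring with equal and opposite forces F and -F on its ends.
# The net force vanishes, but the compression tracks the components
# themselves, via Hooke's law x = F/k.
def net_and_compression(F, k_spring=100.0):
    components = [F, -F]               # two real, occurrent forces
    net = sum(components)              # "adding" is what combining means
    compression = abs(F) / k_spring    # depends on the components, not the net
    return net, compression
```

With F = 50 N the net force is zero and the compression is nonzero: a physical effect of forces that, on Cartwright's account, aren't "there" at all.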

For this, Cartwright deserves an award for Worst Misuse of Mathematics By a Professional Philosopher Not Named Craig.

Causal Action

Cartwright then turns to Creary, who claims that there are two types of physical laws: laws of causal influence and laws of causal action. Though she doesn't say so, in Newton's mechanics, the law of causal action is good old F = ma (NA).

On Creary's account, Coulomb's law and the law of gravity come out true because they correctly describe what influences are produced.... The vector addition law then combines the separate influences to predict what motions will occur. 
 This seems to me to be a plausible account of how a lot of causal explanation is structured. But as a defence of the truth of fundamental laws, it has two important drawbacks. First, in many cases there are no general laws of interaction... In fact, classical mechanics may well be the only discipline where a general law of action is always available.

Apparently Cartwright doesn't know about quantum mechanics, where Schroedinger's equation gives a general law, or quantum field theory, where the Lagrangian path integral does the same.

Anyway, it makes no sense to claim that there are no true fundamental laws of physics, except for those few cases where the laws are both fundamental and true. Naturally, if there are general laws of action, and if physics is a more or less unified subject, then we would expect the general laws to be few. The fact that there are only a few such fundamental laws actually shows the strength of the reductionist thesis rather than the opposite.

On the other hand, if her point is just that most of the time, working physicists are not dealing with the fundamental laws, but with some approximations to them, or phenomenological laws that are not fundamental, then I would agree with her, but find the point trite and uninteresting. These physicists wouldn't ever claim that their approximations are true in all cases.

Actually, Cartwright does know about quantum mechanics, because in the very next section she discusses a quantum example: the spectrum of a carbon atom. What she says here is so pathetic that I can't bear to grind through it point by point. Essentially, she again ignores the general law of action (Schroedinger's equation) in order to make her point that there is no general law of action.

At first I thought Cartwright had simply not bothered to understand the physics before she began drawing philosophical conclusions. But later in the book she shows a detailed understanding of some much more difficult areas of physics, so that doesn't seem to be the problem.

The only other explanation I can think of for these egregious errors is that Cartwright is working with a pre-conceived agenda, such that all the examples are contorted so as to support her thesis.

This is no way to ground a philosophical world view.

Thursday, December 19, 2013

A Natural Reduction

One of the great advantages of the naturalist world view is how it all hangs together. The workings of the mind can be explained in terms of the workings of the brain, which can be explained in terms of the workings of the brain cells, which can be explained in terms of the electrical and chemical properties of molecules, which can be explained in terms of the physical properties of the particles of which those molecules are composed. And the same goes for anything else in the universe: stars, moons, clovers....

Now, I will admit that some of the links in that proposed chain of explanation are not as strong as others: the mind-brain link, for example. But the overall scheme seems sound, and the naturalistic world view can count innumerable successes as evidence of its truth: the technologies of transportation, agriculture, communication, medicine, and psychiatry, to mention just a few. Put this against the abject failure of alternative ways of thinking: what did Christianity (just to pick on one alternative) accomplish in the 1500 years before scientific thinking arose?

At any rate, even when precise explanations are lacking, there doesn't seem to be any strong argument why the gaps cannot be filled out in a naturalistic way.

It seems, though, that many atheist philosophers are no longer satisfied with this reductionistic picture.  The recent book Mind and Cosmos, by atheist philosopher Thomas Nagel, argues that there are aspects of subjective experience that can't be explained by reductionistic means. (Short version here.) Another atheist philosopher, Philip Kitcher, whom I have the greatest respect for, disagrees with Nagel but seems to agree that the reductionist program has not accomplished the task it set out to do. "Unity fails at both ends," writes Kitcher.

For once, I agree with Professor Feser: if naturalism fails to give us a unified picture of everything, then it is time to abandon naturalism and seek a different explanation, rather than to cling to a failed program.

But I am not as pessimistic as these philosophers. Perhaps that's merely ignorance on my part. But some of the anti-reductionist arguments I've come across seem just silly, and, as I already said, there don't seem to be any good arguments for the impossibility of naturalistic explanation. I remain a hard-core reductionist, and I'm going to try to defend that view.

Next: Do the laws of physics lie?

Tuesday, November 26, 2013

Determined by What?

Kripke is central to Ross's argument, and it is certainly true that both Ross and Kripke take Kripke's point to be a metaphysical (not just an epistemological) one, so it is fair of Professor Feser to require a more detailed argument than the one I gave in my first post on Ross. I still think that what I wrote there was basically correct, and that Feser has not adequately countered my objection. But let me try to say it again, more clearly and (I hope) more convincingly.

I was helped immensely in my understanding of the structure of Kripke's argument by a critical response to Kripke by Scott Soames. In an intricate bit of philosophical analysis, Soames shows that Kripke is equivocating between two different meanings of "determine." I think Ross is making a similar, but more basic, mistake, as I will explain.

What does it mean for a set of facts (F) to determine another set of facts (G)? This is the fundamental issue of determinacy. In order to be clear about Ross's argument, we need to know what he thinks F is, what he thinks G is, and what he means by "determine." For Soames, the problem lies in the last of these. For Ross, it lies in the other two.

The first thing we have to get clear is that Ross is not talking about the indeterminateness of meaning, Feser's claims notwithstanding. If he were, he would have to discuss the meaning of "meaning", as Kripke (?) and Soames do. Also, because Kripke's argument leads to skepticism about whether humans, as well as machines, ever mean anything, Ross would owe us an account of how he can avoid the skeptical conclusion about humans while affirming it for machines. But he does none of this. Indeed, as one commenter noted at the beginning of the discussion, Ross never mentions "meaning" in the article. Furthermore, neither Ross nor the naturalist thinks an adding machine means anything when it performs an operation, so if meaning were the issue, the entire discussion about the adding machine would be beside the point.

So the short reply to Ross's use of Kripke is that Ross has divorced the quaddition argument from the Kripkean context. The result is that quaddition becomes simply another version of the problem of limited data. And we have already seen that the problem of limited data helps Ross not at all. If you are convinced of this point, you can skip the rest of this too-long post. What follows simply expands and explains this point.

Ross never discusses meaning - his discussion is entirely about whether the machine is executing a function, and whether the machine's future outputs are determinate. Let's look again at the way he begins his argument:

Whatever the discriminable features of a physical process may be, there will always be a pair of incompatible predicates, each as empirically adequate as the other, to name a function the exhibited data or process "satisfies." That condition holds for any finite actual "outputs," no matter how many. That is a feature of physical process itself, of change. There is nothing about a physical process, or any repetitions of it, to block it from being a case of incompossible* forms ("functions"), if it could be a case of any pure form at all. That is because the differentiating point, the point where the behavioral outputs diverge to manifest different functions, can lie beyond the actual, even if the actual should be infinite; e.g., it could lie in what the thing would have done, had things been otherwise in certain ways. For instance, if the function is x(*)y = (x + y, if y < 10^40 years, = x + y + 1, otherwise), the differentiating output would lie beyond the conjectured life of the universe.
 And later on:

Secondly, opposed functions that are infinite (that is, are a "conversion" of an infinity of inputs into an infinity of outputs) can have finite sequences, as large as you like, of coincident outputs; they can even have subsequences that are infinitely long and not different (e.g., functions that operate "the same" on even numbers but differently on odd numbers). So for a machine process to be fully determinate, every output for a function would have to occur. For an infinite function, that is impossible. The machine cannot physically do everything it actually does and also do everything it might have done.
And from Thought and World

If the machine is not really adding in the single case, no matter how many actual outputs seem "right," there might eventually be nonsums.

[Emphasis added]

I interpret Ross to be saying that what is not determined - his G - is what function the system is computing. Further, on the basis of the preceding quotes, I take it that he cashes out this G in terms of 
   a) what the system might output at a future time, and
   b)  what the system would have output for inputs it might have had, but did not have.

Let's consider a system with two inputs, x and y. Then the question is, "Is the output z determined for every possible x and y?"

Now we come to central question: determined by what? What is Ross's set F - the facts that (fail to) determine the output, z?

Since we are considering a purely physical system, a prime candidate for F would be the set

   F1: All the physical facts about the system.

In this case, the question being asked becomes "Is the output z determined by the physical facts about the system, for all possible inputs x and y?" But this is nothing more nor less than the question of physical determinism. In a deterministic world, the set F1 certainly does determine the possible outputs, z, even for cases that the machine hasn't actually computed. Setting aside issues of quantum indeterminacy (which Ross never mentions), it seems that all outputs are determined by F1.

But F1 is not what Ross has in mind. He never attempts any discussion of physical determinism. Instead, he seems to have in mind something like

   F2: The physical facts that are known about the system at some time T.

I take this from his talk of the "discriminable features of a physical process" in the first quote above and from his talk of "empirical adequacy," though I have to say that Ross is extremely vague about this.

If this is Ross's argument - if he is saying "What the system might have done, or will do, is not determined by the physical facts that we know about the system" - then we should simply reply, "So what?" As we saw already with the problem of limited data, there is no way to argue from an epistemological lack to a metaphysical conclusion.

There is another possibility: suppose that, instead of the physical facts that are known about the system, what Ross really means is

   F3: All the physical facts that can be known about the system.

But now we have to be careful. What does it mean to say something "can be known"? Does this mean all the physical facts that can in principle be known about the system? Then F3 is the same as F1 - all the physical facts about the system can in principle be known (barring quantum uncertainty). In that case, there is no reason to think the outputs are undetermined. However, in actual fact we can never know all the physical facts about the system, no matter what set of observations we make. Thus, any given set of observations, no matter how detailed, is consistent with incompatible functions. Does this justify Ross's conclusion? No, because in that case, there is again only an epistemological lack.

Let me explain the last remark using the example of a computer. The computer seems at first to be a simple counterexample to Ross's claim that the outputs z are not determined by the physical features of the system. For I can look at the program the computer is running (it is encoded physically somewhere in the computer's memory) and see  what the function is: I can deduce, for example, what the output would have been for some inputs x and y that the computer has not actually calculated. But (as Feser points out) a wire might burn out or a transistor go bad inside the computer, so that the actual output is not what the computer program would lead us to believe. This is true, but it doesn't really answer the objection. For suppose I insist on a more detailed physical description of the computer: one so detailed that the failure of the wire/transistor is predictable by this description and so is accounted for. Then we see that the indeterminacy was only apparent: the output is in fact determined by this more detailed set of facts. (In this case the computer would be executing something like quaddition, rather than addition.) If we dig deep enough, we will always find some set of physical facts that do determine the output.
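The faulty-computer case can be made concrete with a toy sketch of my own (the 8-bit register is an assumption for illustration): at a coarse level of description the machine "adds", but the more detailed physical description - here, the register width - determines its output even for inputs it never actually computed, including the ones where it deviates from addition.

```python
# A hypothetical adder built on an 8-bit register. The detailed
# physical description (the register wraps modulo 256) fixes what
# the machine *would* output for inputs never actually submitted.
def adder_8bit(x, y):
    return (x + y) % 256   # overflow behavior is a physical fact

# On small inputs it agrees with addition; past 255 it diverges,
# and that divergence is determined, not left open.
```

This is the sense in which the machine executes "something like quaddition, rather than addition": which function it computes is fixed by a sufficiently detailed set of physical facts.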

We can now see why Ross is wrong to say that "for a machine process to be fully determinate, every output for a function would have to occur." A sufficiently detailed set of physical facts about the system determines not only what outputs will occur at a future time, but also what outputs would have occurred for other inputs that were not actually submitted to the machine.

I have gone into this at length because I think it shows both why Ross's argument is so appealing and why it is wrong. For any given set of physical facts about the system, there are infinitely many inequivalent functions compatible with the behavior of the system. But the set of possible observations is not fixed: if we find a particular set of physical facts leaves the outcome undetermined, we can always ask for a more detailed description of the system. For a sufficiently detailed set of physical facts, we find the outcome is determined by those facts. What makes the argument seem reasonable is the slide from a given set of facts, to any possible set of facts, to all physical facts.

To summarize:

  • Ross's argument is not about  whether meanings are determined by the physical facts about the system, but whether a functional form is determined.
  • Whether a functional form is determined is cashed out in terms of whether future or counterfactual outputs are determined.
  • Ross is unclear about what F it is that fails to determine the functional form.
    • If we take F to be the set of all physical facts about the system, then all outputs are determined, and Ross's argument fails.
    • If we take F to be the set of known physical facts, then outputs are indeed undetermined, but this is only an epistemological issue. For a sufficiently detailed set of physical facts, the output is determined.
  • Thus, all three of Ross's main arguments - quaddition, grue, and the problem of limited data - point only to a lack of knowledge about the system.
  • These epistemological concerns are not enough to draw the conclusion that the system is "physically and logically" indeterminate.

None of this addresses Professor Feser's point about the meaning of a physical process, which I will have to address (I hope!) another time.

* I take Ross to mean "inequivalent" rather than "incompossible" here and throughout - see Richard's remarks in the previous post.

Saturday, November 23, 2013

Guest Post: Purity of Form and Function

While I'm gearing up for my assault on Mount Kripke, here's a guest post from Richard Wein.

Hi everyone. Robert's invited me to make a guest post on the subject of James Ross's paper "Immaterial Aspects of Thought". The resulting post is rather long, partly because there's a lot of linguistic confusion to be cleared up. I hope I can dispel a little of that confusion.

At the core of Ross's argument is his insistence that our logical thinking must involve "pure forms". He then argues that physical processes can't have such forms, and so logical thinking must involve more than just physical processes. I see no good reason to accept that we need any such forms.

Ross's concept of "pure forms" is hardly explained, and remains mysterious to me. He says, for example, that squaring involves thinking in the form "N X N = N^2". He doesn't seem to mean that we must think such words to ourselves. He seems to have in mind some unseen form, possibly Platonistic. In Section III, he talks briefly about "Platonistic definitions", and perhaps this example is one such.

It may help here if I briefly give my own physicalist view, so that I can consider Ross's in contrast to it. I say that the only verbal forms that exist are the sorts that we observe, such as those in writing, speech and conscious thought. These observed verbal forms are produced by non-conscious, non-verbal physical cognitive processes. Of course there's a lot more to be said about how this happens, and particularly about consciousness, but these are not issues that Ross raises. He is not, for example, making an argument from consciousness. Nor is he making an inference to the best explanation, where we must consider the relative merits of his explanation and a physicalist one. He is making a purely eliminative argument, and so the onus is on him to eliminate physicalist alternatives, not on me to elaborate on them.

The claim that our thinking must take the form of definitions in this sense seems to lead to a problem of infinite regress. If squaring is defined in terms of a more basic operation, multiplying, then how is multiplying defined? And so on. But I won't dwell on this point, because Ross's "pure forms" are so mysterious that I doubt I could make any specific positive criticism stick. My point is that we just don't need anything of the sort Ross is insisting on. He gives us no reason to think that the sorts of physical processes I've mentioned are incapable of producing everything that we actually observe. I don't think he even tries to show that. His only response to views like mine seems to be that, on such views, the actual processes we currently call "squaring" wouldn't be "real" squaring, but only "simulated" squaring.

This response is a confused use of language, mistaking an empty verbal distinction for a substantive one. First, regardless of whether we call such operations "real" or "simulated", if they're sufficient to deliver everything we actually observe--and Ross doesn't seem to argue the contrary--then there's no reason to think we need anything more. That in itself should suggest some confusion on Ross's part.

The distinction Ross was originally making was between processes that involve "pure forms" and those that don't. If the distinction he's now making between "real" and "simulated" processes is just a translation of the original distinction into different words, then the translation achieves nothing. He's just re-asserting his unsupported claim that we need such pure forms, but doing it in confusing new words. If, on the other hand, the new distinction were genuinely different from the original one, Ross would actually have to demonstrate that denying pure forms entails denying real squaring. He would have to make a substantive argument, and he wouldn't be able to do that without clarifying the meaning of his new distinction. In fact he makes no such argument (or clarification). He simply puts the words "we only simulate" into the mouth of the denier, as if it's indisputable that denying pure forms entails denying real squaring. So it's pretty clear that this is just a confusing terminological switch, masquerading as a substantive argument. The appearance of having achieved something arises through conflation of a weaker sense of the words (in which they are just a translation of the original claim) with a stronger sense (which has the appearance of a more irresistible claim). To accept Ross's conclusion on this basis is to commit a fallacy of equivocation.

Unlike Ross's denier, I don't say that we don't "really" square. Neither do I say that we do "really" square. The word "really" is misleading here. If denying that we "really" square is to be taken as just another way of saying that we don't think in "pure forms", then I prefer to say--more directly--that we don't think in pure forms.

There ends my main response to Ross's argument. But I'd like briefly to address some other aspects of his paper which are liable to cause confusion.

Ross uses the term "pure forms" interchangeably with "pure functions", and I'm afraid this translation may have led to a conflation of these concepts of his own with the concept of a mathematical function in the ordinary sense of that term. Mathematical functions are purely abstract, and don't exist in anything like the sense that physical objects do. Pairs of mathematical functions like addition and quaddition are correctly called "non-equivalent". To call them "incompossible" would be a kind of category error, mistakenly implying that it makes any sense to ask whether they can co-exist. Ross's talk of incompossible pure forms/functions is further support for the conclusion that he sees these as having a more real sort of existence than do mathematical functions (in the ordinary sense).

I have no idea what it could mean for the process of squaring to take the form of a mathematical function. The messy, fallible real-world processes that we call "squaring" are quite a different thing from the abstract function that mathematicians call "f(x)=x^2". Talk of a process taking the form of a mathematical function seems to me like a category error, an attempt to transfer properties inappropriately between pure abstractions and real processes. Of course, during the process of performing an arithmetic operation, some definition of a function (some form of words) might be produced, e.g. in conscious thought. But that's a production of the process, and not the process itself or the form of the process. Moreover, simple arithmetic doesn't always involve giving ourselves any definitions, rules or instructions for how to proceed. The answers can come to mind (or speech) as the result of non-verbal non-conscious processes, without any verbal reasoning. That's why there's no infinite regress of definitions, rules or instructions.

You may have noticed that I haven't mentioned determinacy. Ross's argument is primarily made in terms of pure forms. But at times he translates into the language of determinacy, and his summary argument is expressed in such language. This translation into the language of determinacy serves no useful purpose, but creates further opportunities for confusion, because Ross's "indeterminacy" is easily conflated with other senses of the word, and Ross himself encourages such conflation by appealing to work on other sorts of indeterminacy (and even "underdetermination") which have little to do with his argument.

Since I don't think we need any "realization" of "pure forms", and I would question whether the concept is even coherent, there's no point in my addressing Section II of the paper in detail. But I think it would be useful briefly to give a clearer account of the addition/quaddition scenario and "indeterminacy". Ross employs a variant of Kripke's quaddition function, where the differentiating point (instead of 57) is set to a number of years greater than the lifetime of the universe. That example seems peculiar to me, as I can see no reason why a system can't calculate a number of years greater than that lifetime. So I'll take a slightly different example of my own. Let the differentiating point be a number greater than any that can be represented by a given calculator. Then there's a sense in which the calculator equally well "realizes" addition and quaddition. That sense is that the calculator gives the answers for quaddition as well as it gives them for addition. As long as we don't confuse ourselves with talk of "pure forms", there's nothing remarkable about this. Ross wants to say that the calculator can't realize two different functions, so it must realize neither. But in the sense of "realize" that I've just used, there's no problem with saying that it realizes both, and the fact that it realizes both has no substantive significance.
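Richard's calculator variant can be sketched directly (the 8-digit capacity and the particular differentiating point are assumptions for the example): on every input the calculator can represent, addition and quaddition agree, so in the relevant sense it "realizes" both.

```python
MAX = 10**8 - 1    # largest input an assumed 8-digit calculator accepts
DIFF = 10**9       # differentiating point, beyond anything representable

def addition(x, y):
    return x + y

def quaddition(x, y):
    # agrees with addition below the differentiating point
    return x + y if x < DIFF and y < DIFF else 5

def calculator(x, y):
    if x > MAX or y > MAX:
        raise ValueError("not representable")
    return x + y
```

The two abstract functions differ, but only at inputs the device can never receive; nothing about the device settles which one it "really" computes, and nothing needs to.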

Given that we're not talking about epistemological or quantum indeterminacy, any indeterminacy lies just in the fact that often our categories are not sufficiently well-defined for us to be able to assign a given state of affairs to a single category. For example, for some people there is no fact of the matter as to whether they are best described as children or adults. That's just a limitation of language. Indeterminacy is significant for our understanding of how language works, and for making sure we use language in ways that don't cause confusion, but it doesn't have any substantive, non-linguistic significance. Some people are reading far too much into indeterminacy.

Tuesday, November 19, 2013

Grue Some More

The second point Ross brings up in support of the indeterminacy of the physical is Goodman's Grue Argument. This one is easily dealt with.

Goodman defines something as "grue" if it is first observed before Jan 1, 2025 (say) and is green, or is first observed after Jan 1, 2025 and is blue. He uses this to make a point about induction: any evidence we cite as evidence for the proposition "all emeralds are green" is also evidence for the proposition "all emeralds are grue." Thus, the grue problem casts doubt on the rationality of inductive conclusions.
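Goodman's predicate is easy to write down explicitly (a sketch of my own, using the cutoff date above): every pre-cutoff green emerald satisfies both predicates, which is exactly why the same evidence confirms both generalizations.

```python
from datetime import date

CUTOFF = date(2025, 1, 1)  # the (arbitrary) cutoff from the text

def is_green(color):
    return color == "green"

def is_grue(color, first_observed):
    # grue: green if first observed before the cutoff, blue otherwise
    if first_observed < CUTOFF:
        return color == "green"
    return color == "blue"
```

Any observation made before the cutoff that satisfies `is_green` automatically satisfies `is_grue` as well; the predicates diverge only on observations we haven't made yet.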

So we see that Goodman's point was about induction, not indeterminacy. But this is really unimportant for Ross's argument, because Ross doesn't actually use the grue argument in any essential way. Rather, he cites grue either as an example of the problem of limited data, or as an analogy to Kripke's quaddition argument. For instance, Ross writes:

A decisive reason why a physical process cannot be determinate among incompossible abstract functions is "amplified grueness": a physical process, however short or long, however few or many outputs, is compatible with counterfactually opposed predicates; even the entire cosmos is. Since such predicates can name functions from "input to output" for every change, any physical process is indeterminate among opposed functions. This is like the projection of a curve from a finite sample of points: any choice has an incompatible competitor.

But the problem of limited data, as we have seen, is irrelevant for the indeterminacy question. So the grue point devolves onto the Kripke/quaddition point, which I will consider next.

Friday, November 15, 2013

A Pointless Point about Data Points

I've been, and continue to be, busy with my real work, so I have to apologize if these posts dribble out slowly, a bit at a time, rather than in one long, well-thought-out post the way Prof. Feser does. But maybe it will actually be an advantage to try to clear up one point at a time.

Ross gives three main arguments to support his claim B: "No physical system is determinate." These are:
  1. Kripke's addition/quaddition argument.
  2. Goodman's grue argument.
  3. The problem of limited data.
Kripke is the central point of Ross's argument, and it is the most difficult to tackle. I'm going to start at the other end, with (3).

The problem of limited data (PLD) is pretty easy to state. Suppose I have some system from which I can take data, and I am trying to determine what function the system is following in order to produce the data. If I take a limited number (say a finite number) of input-output pairs, there will be an infinite number of functions which fit the given data points. For example, suppose I have only three data points. Then there is exactly one quadratic function that will exactly fit those three points. But I could also fit the data with a cubic function, or a quartic function, or a polynomial of any higher degree, or an exponential function times an appropriate polynomial, etc.
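A concrete instance, with two specific polynomials I've chosen for illustration: the unique quadratic through the points (0, 1), (1, 2), (2, 5) is x² + 1, but a cubic passes through the very same points.

```python
def quadratic(x):
    # The unique degree-2 polynomial through (0,1), (1,2), (2,5).
    return x**2 + 1

def cubic(x):
    # A degree-3 polynomial through the very same three points.
    return x**3 - 2 * x**2 + 2 * x + 1

points = [(0, 1), (1, 2), (2, 5)]

# Both hypotheses fit the data exactly...
for x, y in points:
    assert quadratic(x) == y
    assert cubic(x) == y

# ...but they diverge as soon as we leave the sample:
assert quadratic(10) == 101
assert cubic(10) == 821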

Now, how does this help Ross establish his (B)? Actually, it doesn't help at all. The mere fact that I have a limited amount of data doesn't tell me anything about the process that is producing that data. Since the PLD applies equally to determinate and indeterminate systems, it can't. This seems completely obvious to me, but since it seems to be a point of contention, I will spell it out.

Suppose I get some set of N input-output pairs from a determinate process. (For Ross, this means having a human compute them.) Let someone else give me N input-output pairs from an indeterminate process that is simulating (in Ross's sense) the first process. (For Ross, this could be a computer.) Now, since Ross allows that the indeterminate process can simulate the determinate process very closely, the two data sets will be identical. (This is easy to see if we say the process in question is addition: the computer and the human will give the same outputs for the same inputs. Unless, of course, the human makes a mistake.)

Since the N input-output pairs are identical whether I get them from a determinate or an indeterminate process, there is obviously no way I can tell from the data which sort of process produced that data.

Now let's introduce the PLD. Clearly, it applies to both processes. So if (as Ross claims) the PLD provides support for the claim that the purely physical process is indeterminate, then it also provides support for the claim that the human-generated process is indeterminate. So the PLD strengthens Ross's case for B to the exact extent that it weakens his case for A (that humans are capable of determinate processes). In other words, it doesn't help him at all.

This is what I meant when I wrote that Ross's arguments don't go beyond epistemology. The PLD says I can have only limited knowledge about the process that produced the data. But it says nothing at all about the metaphysical properties of that process.

Tuesday, November 5, 2013

Goodbye, Hilda

I'd like to thank Prof. Feser for his continued patience in responding to my critique of Ross's argument. I've been very busy, but I finally had some time to look at his most recent response. He misconstrued the A, B, and C of my previous post (understandably, since I hadn't spelled them out clearly), and I began a long post carefully laying out the logic of my argument and why Feser's response didn't answer it. Then I realized that it did answer it, in spite of the misunderstanding about A, B, and C. The "purely physical" assumption is indeed the critical assumption in Ross's argument that I wasn't taking into account, and it does eliminate the Hilda objection in a non-question-begging way. I apologize to Prof. Feser for the unwarranted and unnecessary snark in my last post. I am hereby giving Hilda the boot.

I hope to return to my original epistemological objection (as time permits), but I wanted to get this apology out in a timely manner.

Crow is a dish best eaten warm.

Monday, October 21, 2013

A Head Scratcher

Ed Feser has responded again, and it's a puzzler.

I will ignore the first part of his post, in which he is once again arguing against some argument that is not the argument I made.

Next, Feser points out that my objection, even if it worked against Ross, was irrelevant against Feser's own version of the argument.
For another thing, it is not just Ross’s views that are in question here, but mine.  And I can assure Oerter that what I am claiming is (2) rather than (1).  So, even if what he had to say in his latest post was relevant to the cogency of Ross’s version of the argument in question, it wouldn’t affect my own version of it.

Well, I never said I was arguing against Feser's version of the argument, I explicitly stated I was critiquing Ross's argument. And that is what I will continue to do here, though I may return to Feser's version later if I have the time and inclination.

Feser then goes on to explain why he thinks his version of the argument is actually what Ross intended anyway. Specifically, he addresses what Ross means by saying the calculator is not adding. Now, Ross makes a clear and consistent distinction in his paper between true adding, which he elaborates as carrying out the "pure function" of addition, and what the calculator does, which is only "simulating addition." This is a crucial distinction for him, because his basic claim is that humans can execute pure functions, while any purely physical system cannot.

In my posts I have consistently (I hope) been using "adding" in Ross's first sense. I didn't think it was necessary to spell this out: since I was critiquing Ross's paper, I was using Ross's terminology, except where I explicitly stated otherwise. But to be clear, I will henceforth use ETPFOA ("executing the 'pure function' of addition") instead of "adding."

So when I said that Ross denied that the machine was adding, I meant it was not ETPFOA. Feser, on the other hand, wrote,

Ross is not denying, for example, that your pocket calculator is really adding rather than “quadding”....

So how does Feser respond? He quotes Ross's discussion of simulated addition, then writes:

So, Ross plainly does say that there is a sense in which the machine adds -- a sense that involves simulation, analogy, something that is “adding” in the way that what a puppet does is “walking.”  How can that be given what he says in the passage Oerter quotes?  The answer is obvious: The machine “adds” relative to the intentions of the designers and users, just as a puppet “walks” relative to the motions of the puppeteer. The puppet has no power to walk on its own and the machine has no power to do adding (as opposed to “quadding,” say) on its own.  But something from outside the system -- the puppeteer in the one case, the designers and users in the other -- are also part of the larger context, and taken together with the physical properties of the system result in “walking” or “adding” of a sort

In short, Ross says just what I said he says.

Now it is very strange for Feser, who is a professional philosopher, to sweep aside a crucial distinction like this, as if it were unimportant. It is not true that Ross says the machine can add in the ETPFOA sense that both Ross and I are using. It is true that Ross says the machine can do something like adding - but only something that has the name of adding, and gets that name by analogy to ETPFOA, not because it is actually ETPFOAing.

Moreover, I don't see anywhere Ross says that the machine "adds relative to the intentions of the designers and users," as Feser claims. And what exactly is Feser claiming here? That the machine ETPFOAs relative to the intentions of the designers? Or that it only simulates adding relative to them?  OK, the machine taken together with the larger context results in addition "of a sort" - but of which sort? Again, Feser glosses over the crucial distinction.

You wouldn't think it possible, but there's actually worse to come. Quoting Feser again:

Oerter insists that I am misunderstanding Ross here.  As we will see in a moment, I am not misunderstanding him at all, but it is important to emphasize that even if I were, that would be completely irrelevant to the question of whether the argument for the immateriality of the intellect that we are debating is sound.  For one thing, and quite obviously, whether or not I have gotten Ross right on some exegetical matter is irrelevant to whether premises (A) and (B) of the argument in question are true, and whether the conclusion (C) follows from them.  So Oerter is, whether he realizes it or not, just changing the subject.  

Later on, he continues in a similar vein:
Evidently the reason Oerter thinks all this is worth spilling pixels over is that he thinks his “Hilda” example shows that Ross is being inconsistent, and he needs for me to have gotten Ross wrong in order to make his “Hilda” example work.  I have already explained, in my previous post, why Ross is not at all being inconsistent.  But even if he were, it wouldn’t matter.  The alleged inconsistency, you’ll recall, is that Ross treats Hilda as adding despite the fact that we can’t tell from her physical properties alone whether she is, whereas he does not treat the machine as adding despite the fact that we can’t tell from its physical properties alone whether it is.  Suppose he really were inconsistent in this way.  How does that show that premise (B) of his argument is false (much less that (A) is false, or that the conclusion doesn’t follow)? 

Answer: It doesn’t.  The most such an inconsistency would show is that Ross needs to clarify what is going on with Hilda that isn’t going on with the machine.  And there are several ways he can do this consistent with the argument.  First, he could say what I would say (and what, as I have shown, he does in fact say himself, despite what Oerter thinks) -- namely that the machine does add in a sense, but just not by virtue of its physical properties alone.  There is perfect consistency here -- both systems, Hilda and the machine, add (albeit in analogous senses), but neither does so in virtue of its physical properties alone.

This is just bizarre. Ed Feser, who revels in pointing out inconsistencies of the naturalists, is arguing that an inconsistency doesn't matter? Nor is this some trivial point of Rossian exegesis, as Feser implies: it's a basic contradiction in Ross's whole scheme. As I pointed out already, the distinction between ETPFOA and simulated adding is crucial to Ross's argument.

The logic of my Hilda example is straightforward. Ross says that humans can ETPFOA. Ross says that  A, B, and C entail that a computer cannot ETPFOA. I claim that A, B, and C are true for Hilda, too. So A, B, and C entail that Hilda cannot ETPFOA.

With this contradiction, the whole argument falls to pieces. Now, you can argue that I am wrong: that A, B, and C are not true of Hilda. Or you can argue that there is some D that I missed that is true of the computer but not true of Hilda. But you can't say this example is irrelevant to the soundness of Ross's argument.

Saturday, October 19, 2013

What Does Ross Say?

Well, no, I'm not making the sort of trivial, "silly" argument that Feser likes to ascribe to me. But before I can clarify this, it is necessary to clarify just what it is that Ross is saying.

Feser writes:

Part of the problem here might be that Oerter is not carefully distinguishing the following two claims:

(1) There just is no fact of the matter, period, about what function a system is computing.

(2) The physical properties of a system by themselves don’t suffice to determine what function it is computing.

Oerter sometimes writes as if what Ross is claiming is (1), but that is not correct.  Ross is not denying, for example, that your pocket calculator is really adding rather than “quadding” (to allude to Kripke’s example).  He is saying that the physical facts about the machine by themselves do not suffice to determine this.  Something more is needed (in this case, the intentions of the designers and users of the calculator). 

What exactly does Ross claim? Here is Ross from his paper:

Adding is not a sequence of outputs; it is summing; whereas if the process were quadding, all its outputs would be quadditions, whether or not they differed in quantity from additions (before a differentiating point shows up to make the outputs diverge from sums).

For any outputs to be sums, the machine has to add. But the indeterminacy among incompossible functions is to be found in each single case, and therefore in every case. Thus, the machine never adds.

Extending the outputs, even to infinity, is unavailing. If the machine is not really adding in the single case, no matter how many actual outputs seem "right," say, for all even  numbers taken pairwise (see the qualifying comments in notes 7 and 10 about incoherent totalities), had all relevant cases been included, there would have been nonsums. Kripke drew a skeptical conclusion from such facts, that it is indeterminate which function the machine satisfies, and thus "there is no fact of the matter" as to whether it adds or not. He ought to conclude, instead, that it is not adding; that if it is indeterminate (physically and logically, not just epistemically) which function is realized among incompossible functions, none of them is. That follows from the logical requirement, for each such function, that any realization of it must be of it and not of an incompossible one. [emphasis added]

Ross is quite clear: he is not saying (2) at all. Neither is he saying (1). He is saying something stronger than either (1) or (2): the machine does not add - period. It is not that the physical properties of the system alone don't determine what function it is computing, the system isn't actually computing any function at all. "... if it is indeterminate (physically and logically, not just epistemically) which function is realized among incompossible functions, none of them is."

I just don't see how Feser can write "Ross is not denying, for example, that your pocket calculator is really adding rather than “quadding”..." for that is exactly what Ross is denying. 

It is this denial I had in mind when I said Ross couldn't apply the same reasoning to Hilda without denying that Hilda adds, too. But rather than re-visit that argument I will wait for the professor to (I hope) clarify. 

Tuesday, October 15, 2013

Feser and Ross and me

Ed Feser has responded to my complaints about Ross's argument - sort of. Once again, I am flattered that Feser thinks my amateur philosophizing worthy of his attention. I always learn a lot from our exchanges, even if I am not ultimately convinced of his point. He (correctly) diagnoses my confusion between indeterminacy of meaning and physical indeterminism. But that confusion doesn't (I think) invalidate my main point: that Ross's argument never gets him beyond epistemological indeterminacy.

Oddly, Feser doesn't specifically respond to my criticism. Instead, he refers back to his American Catholic Philosophical Quarterly article. But in that article, he doesn't specifically respond to the epistemology objection, either. Here's what he wrote:

Dillard also suggests that Kripke’s point is epistemological rather than metaphysical—that his argument shows at most only that the claim that someone is thinking in accordance with a certain function (such as addition) is underdetermined by the physical evidence, and not that the physical facts are themselves indeterminate. This is odd given that both Kripke and Ross explicitly insist that the points they are respectively making are metaphysical rather than merely epistemological. Indeed, Kripke says that “not even what an omniscient God would know . . . could establish whether I  meant plus or quus,” because for the reasons given above, everything about my past behavior, sensations, and the like is compatible (not just compatible as far as we know, but compatible full stop) with my meaning either plus or quus. Nor does Dillard say anything to show otherwise.

That is, Feser merely states that Ross says that his point is metaphysical, not epistemological. But Feser doesn't give any additional reasons for us to believe that Ross has actually established this. Well, I agree that Ross says that - but I don't think he has established it.

Here's why. Note that Ross's argument is just as valid when talking about what another person is doing when (say) adding. That is, when I am trying to determine whether Hilda is actually adding, or merely simulating adding, all I can do is investigate her physical actions and responses. If Ross's argument is correct, then from a finite amount of data such as these I cannot determine whether Hilda is adding or not. So (if Ross is right) I can never know whether another person is capable of addition.

But note that from the above it doesn't follow that Hilda is not adding. It may be that Hilda is in fact doing something perfectly determinate. I just can't know whether she is or not. So it is clear that Ross's argument doesn't get us past the epistemological.

This point ties in with my second complaint about Ross: the double standard. If I can't say for sure that another person is not adding, then by the same token I cannot say for sure that a machine is not adding.*

In his article, Feser almost makes the same point. Kripke's original point (if I understand it correctly) was, not only can I not be sure what someone else means when they say something, I cannot even be sure what I mean when I say something. That is, even my own thoughts are indeterminate in meaning. Ross obviously doesn't want this conclusion - his own argument relies on one's own thoughts being determinate. Feser points out that (using Frege's conception of meaning) we cannot infer from the external indeterminacy that there is no internal meaning. He writes:

Frege emphasized that the sense of an expression is not a private psychological entity such as a sensation or mental image, any more than it is something material. Thus he would hardly take an argument to the effect that meaning cannot be fixed either by sensations and mental images or by bodily behavior to establish that there is no determinate meaning at all.

But establishing that there is "no determinate meaning at all" is precisely what Ross needs for his argument. So the argument fails.

* Though it is not directly relevant to the argument, I want to point out that the situation is actually worse with respect to the machine than it is with respect to another person. We can open up the machine, trace its circuits or its mechanism or whatever, and deduce what it will do for a given input. With another person, we can only investigate the physical outputs: we can't open up Hilda's brain and trace its circuitry. Well, not yet, at any rate.

Saturday, October 12, 2013

Ross's Double Standard

Another problem with Ross's argument is the double standard he employs. It's obvious that humans are not nearly as accurate as machines when it comes to computations. But Ross doesn't take this as evidence that humans are not carrying out a pure function. On the contrary, he suggests that mistakes could be evidence that the human is carrying out the function. He writes:

This is not a claim about how many states we can be in. This is a claim about the ability exercised in a single case, the ability to think in a form that is sum-giving for every sum, a definite thought form distinct from every other. When a person has acquired such an ability is not always transparent from successful answers, and it can be exhibited even by mistakes. [Emphasis added.]

But when he talks about machine addition, he counts any error, even a potential error many years in the future, as evidence that the machine doesn't truly add.

This is a blatant double standard. Logically, if a mistake is evidence that X is not performing the function, then that is true whether X is a human or a machine.

Sunday, October 6, 2013

Against Physicalism

Are humans more than just a complicated physical machine? Physicalism is the idea that the physical is "all there is" - everything that exists is either physical or is reducible to something that is physical. Thanks to Ed Feser's blog, I've come across an interesting argument from James Ross that physicalism cannot be true. Ross takes an approach that draws on Kripke, Goodman, and Quine to build a rather astonishing claim about physical systems. Ross's argument is very subtle and worth a close look. If it succeeds, it's truly an astounding accomplishment: one of the great philosophical debates of all time, solved. I don't think it succeeds, and I'm going to try to suggest why not.

Feser summarizes Ross's argument like this:

All formal thinking is determinate.
No physical process is determinate.
Thus, no formal thinking is a physical process.

Specifically, Ross refers to "pure functions" that humans can define but that cannot be implemented by any purely physical system. He gives examples like adding, squaring a number, and the modus ponens of logic.

Now, what makes Ross think that a physical system cannot add? Of course he knows that mechanical devices and computers are capable of performing sums, but he says they are only simulating addition, not truly adding. He writes:

Whatever the discriminable features of a physical process may be, there will always be a pair of incompatible predicates, each as empirically adequate as the other, to name a function the exhibited data or process "satisfies." That condition holds for any finite actual "outputs," no matter how many. That is a feature of physical process itself, of change. There is nothing about a physical process, or any repetitions of it, to block it from being a case of incompossible forms ("functions"), if it could be a case of any pure form at all. That is because the differentiating point, the point where the behavioral outputs diverge to manifest different functions, can lie beyond the actual, even if the actual should be infinite; e.g., it could lie in what the thing would have done, had things been otherwise in certain ways. For instance, if the function is x(*)y = (x + y, if y < 10^40 years, = x + y + 1, otherwise), the differentiating output would lie beyond the conjectured life of the universe.

Now, I can go along with Ross as far as the epistemological aspect of his conclusion: no matter how many input-output pairs we examine, we can never know what function is being computed. But Ross claims much more: he says physical systems are not just epistemologically indeterminate but "physically and logically" indeterminate, too. That is, it's not just that we can't know what function the machine is computing, but there really is no fact of the matter about what the output will be until it actually happens.
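Ross's x(*)y can be transcribed directly (treating the inputs as plain integers and using his 10^40 threshold; the transcription is my own):

```python
THRESHOLD = 10**40  # Ross's differentiating point ("years")

def star(x, y):
    # Ross's x(*)y: identical to addition for every output the universe
    # will ever actually produce, diverging only beyond the threshold.
    return x + y if y < THRESHOLD else x + y + 1

# Every observation we could ever collect is consistent with both
# "the system adds" and "the system computes (*)":
assert all(star(x, y) == x + y for x in range(100) for y in range(100))
```

This is exactly the epistemological point I grant him: no finite (or even infinite actual) run of outputs distinguishes the two hypotheses.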

The problem is, the argument Ross gives is not up to the task of proving that claim.

First of all, what does Ross mean by "empirically adequate"? He is not using this in the sense of van Fraassen, for whom empirical adequacy means agreement, not just with all past observations, but with all possible observations. For Ross explicitly mentions a "differentiating point", possibly at some remote future time, at which the outcomes disagree. Nor does he mean "agreement with all future observations", for the same reason. So he must mean merely "agreement with all past observations."

But having two hypotheses that agree with all past observations is not enough to tell us that the physical system is actually (physically) indeterminate. It only says that our information is insufficient to distinguish between the two.

Another example Ross gives is the problem of determining a function, knowing only a finite number of data points. He (correctly) points out that there is an infinity of curves that will agree on those finite data. But this just says we don't know what the function is that produced the data. It doesn't follow that there is no such function at all. But that's what Ross needs for his conclusion that physical systems are not just epistemically indeterminate, but physically indeterminate.  

Ross's other arguments draw on Goodman's and Quine's work. These, too, reach only as far as epistemology. Goodman's grue problem suggests that we can never know for sure whether we are inducting on the right categories. But it is a long way from that epistemological claim to the claim that there are no correct categories for induction to work on. Duhem's claim about underdetermination says only that we can't know what part of a whole complex of assumptions, theories, and practices is at fault when an experiment disagrees with theory. Again, this is only an epistemological claim. True, Quine tried to extend this uncertainty to the whole realm of human knowledge - but this extension hardly helps Ross's claim that humans can add, employ modus ponens, etc. Thus, none of Ross's arguments, either in the article or in his book, Thought and World, take us beyond epistemological indeterminacy.

I have to say it is exceedingly odd to see Feser defending physical indeterminism here. In our discussions of quantum mechanics and causation he argued strenuously that there is no such thing as physical indeterminism - not even in the case of quantum mechanics (where nearly all physicists accept fundamental indeterminism). So I'm wondering how Feser can square real, physical and logical indeterminism with the principle of causality.

Monday, June 17, 2013

More Detuning

If my Fine Tuning Argument for Naturalism (FTAN) is going to work, it needs to be supported with examples. To recap: the FTAN notes that, because God is (by supposition) all-powerful, there are many ways he could have created the universe other than by naturalistic methods. If we assume that the probability of any of these methods is equal, then there is a very small probability that we will discover that God has chosen a naturalistic method. (The assumption that "probability is equally spread over the various possibilities" is analogous to the assumption made in the usual Fine Tuning Argument for God (FTA for short).) If we find that observations agree with naturalistic methods, then we have a strong presumption against theism.

I already admitted that the FTAN in its original form doesn't quite work, because many instances of God intervening in a naturalistic process would be indistinguishable from a (somewhat different) naturalistic process. So what I need is to show explicit examples where a miraculous intervention would be distinguishable from a naturalistic process, and where the evidence available points to the naturalistic process. I gave a few in the earlier post, here I want to add to the list. (I won't make any attempt to estimate the degree of detuning here.)

1.) Age of the universe - If life evolved through naturalistic processes, then the universe must be old enough for evolution to have happened. If God created the universe, this need not be the case: there is no reason the universe couldn't be, say, 6000 years old. God could have placed humans, animals, plants, etc., on a ready-made earth, and human history could have proceeded in just the way it has.

In fact, we know the universe has been around for about 14 billion years - plenty of time for evolution to have happened.

2.) Age of the earth - Even if the universe is old, there is no reason the earth itself needs to be more than 6000 years old. God could have inserted an earth into a pre-existing universe, complete with animals, humans, etc.

Naturalistically, of course, the earth must be old enough for evolution to have happened. In fact, we know the earth is about 4.6 billion years old - plenty of time for evolution to have happened.

3.) The earth is dateable - Actually, if the earth was created separately by God, there is no reason for it to be any age at all. What I mean is this: The age of the earth can be determined by comparing the ratios of different isotopes in radioactive decay cascades. If God created the earth out of whole cloth, as it were, then comparing these ratios for different cascades would not lead to a sensible determination of the earth's age. Those ratios could have had any values God chose. Unless God was trying to fool us by carefully adjusting the ratios to give a particular, consistent, age for the earth, those ratios need not point to a single particular age.

In fact, we find that the isotope ratios do point to a consistent value of about 4.6 billion years. Once again, naturalism wins.
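The logic of the consistency check can be sketched as follows. The half-lives are standard textbook values, and the algebra is the usual decay law, D/P = e^(λt) − 1: each cascade independently yields an age, and only a material with a real decay history forces them to agree.

```python
import math

# Two independent decay cascades with very different half-lives (years).
HALF_LIVES = {"U238 -> Pb206": 4.468e9, "Rb87 -> Sr87": 48.8e9}

def daughter_parent_ratio(age, half_life):
    """Daughter/parent isotope ratio after `age` years of decay."""
    lam = math.log(2) / half_life
    return math.exp(lam * age) - 1

def inferred_age(ratio, half_life):
    """Invert the decay law to recover an age from a measured ratio."""
    lam = math.log(2) / half_life
    return math.log(1 + ratio) / lam

true_age = 4.6e9  # years: the measured age of the earth

# If the rock really decayed for 4.6 billion years, the independent
# cascades converge on one and the same age:
ages = [inferred_age(daughter_parent_ratio(true_age, hl), hl)
        for hl in HALF_LIVES.values()]
assert all(abs(a - true_age) / true_age < 1e-9 for a in ages)
```

A freshly created earth would have no reason to exhibit ratios that solve these independent equations consistently; that the real ratios do is the detuning evidence.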

4.) Age of life on earth - Even with an old universe and an old earth, God could have simply zapped life into existence. He could have done this at any point in the evolutionary history of life, and evolution could have proceeded from that point. Or, he could have simply created all species in their present forms.

The fossil record shows that life has been on earth at least a billion years, and likely as much as 3.6 billion years. Needless to say, this is consistent with the naturalistic evolution of all current life forms.

5.) Common descent - If God created life on earth, there is no particular reason all life forms would be related to each other. He could have created each species individually (as indeed Christians thought for centuries), in which case there wouldn't be any relationship-through-descent.

Naturalistically, it need not be the case that all life is descended from a single common ancestor, either. In principle, life could have arisen on earth more than once. (Here I need to assume that life can arise naturalistically, which we do not know for sure yet.) However, given what we know about evolution, we can be sure that, in time, many different but related species will arise. Think of Darwin's Galapagos finches - many different species but all descended from a single ancestral species that somehow found its way to the islands. So, under naturalism, we expect to find many different species that are related by common descent.

In fact, we have strong reasons to believe that all life on earth is related by common descent from some original self-replicating life form. The evidence comes through the study of the anatomy of living organisms, through the fossil record that reveals the slow modification of life forms over vast periods of time, and through genetics that confirms the relationships deduced from anatomy and the fossil record. This is of course consistent with naturalism, but highly unlikely under theistic assumptions.

I should say again that I don't think this is a very good argument for naturalism. What I hope is that, by turning the fine tuning argument around, I can get you to see the problems with it. In my view the main problem is the assumption that probability is equally spread over the conceivable options. In the FTA, these options are the various conceivable values of the fundamental constants of nature. In the FTAN, the options are the various conceivable ways God could have created life on earth. In neither case do we have any good reason to think the probability is equally spread among  the conceivable options.

Can you think of other examples of ways God could have chosen to create life, but didn't?

Tuesday, May 28, 2013

What's Wrong With Fine Tuning?

I've been saying for a while that I think fine tuning arguments are bad arguments. I want to explain that.

Suppose I shuffle a deck of cards and let you draw one. It is the queen of hearts. What was the probability that you would draw that card? Think carefully before answering!

Thursday, May 2, 2013

Look out, Pixar!

When I was a boy, I was told atoms were too small to be seen. Today I can watch a movie drawn with individual atoms.

Hey, Pixar started small, too.

Tuesday, April 16, 2013

Pruss, PSR, and QM

In the discussion of the Leibnizian cosmological argument for God, Tyler mentioned a book by Alexander Pruss (of Prosblogion fame), The Principle of Sufficient Reason: A Reassessment. In particular, Tyler said he thought Pruss had answered all the possible quantum mechanical objections to the Principle of Sufficient Reason (PSR). I have finally gotten a chance to look at Pruss's chapter on QM. It seems to me that he succeeds - much too well.

Pruss has a good grasp of the quantum mechanical objection. He even presents a version of the famous EPR-Bell inequality paradox. He goes on to suggest several ways of reconciling QM with the PSR, including non-local causation, backward-in-time causation, and hidden variable theories. With regard to the latter, he acknowledges that Bohmian theories run into problems with relativity, and are not a really satisfactory replacement for standard quantum field theory. But, he says, it is possible to show that some such theory could, in principle, explain the QM results.

For instance, take a neo-Leibnizian theory that says that every point of space is a monad, and this monad has encoded within it a list of all the events that will happen throughout time at that point and through an internal causal process it goes deterministically through these events as time passes. 
Now, it seems obvious to me that this solution achieves too much.  For this could be said of any conceivable pattern of events in any conceivable universe. No matter how random, lawless, and chaotic, those events could be described in Pruss's monad theory as deterministic and causal. So this solution makes the PSR trivial, and therefore uninteresting. If every possible pattern of events satisfies the PSR, then the PSR has no content.

If the PSR means anything at all, then it needs a more rigorous notion of "reason" and "causality" than Pruss is employing here.

Saturday, April 6, 2013

Supernatural Times Two

Two interesting things I've come across about defining the supernatural:

The first is a paper by Fishman and Boudry that makes the point that methodological naturalism is not an a priori commitment of the scientific enterprise; rather, it is a conclusion arrived at in the same way as other scientific results, namely, by applying the usual criteria of economical explanation of phenomena. They argue that "supernatural," as it is commonly used, is too loose a term to be useful. Instead, they say, we should talk about the "overnatural" - beings or powers that are similar to natural ones, but beyond what is normally possible (like Superman or walking on water) - and the "transnatural" - things that are "categorically different from ‘natural’ ones, so much so that their properties are essentially mysterious, ineffable, and incomprehensible."

The overnatural can be investigated the way we investigate any scientific hypothesis. The transnatural, on the other hand, cannot be scientifically investigated, since it (by definition) is incomprehensible and inexplicable. However, they argue, the transnatural is an empty concept. They quote Martin Mahner:

There neither is an ontological theory proper of the transnatural nor could there be, because there can be no theory of the unintelligible.
Scientists thus reject a transnatural explanation, not because of any a priori commitment, but for the same reason they reject any other unintelligible or ill-formed hypothesis.

The second bit is a blog post by Victor Reppert, who argues that the supernatural is a claim that minds are not composed of non-mental things - they are part of the "rock bottom level of the universe." This is really a thesis about dualism rather than a definition of the supernatural. But I think Reppert is right to focus on minds in talking about the supernatural.

Imagine that large boulders spontaneously levitated at random times. Such a phenomenon would be inexplicable according to current science, but I don't think it would be considered supernatural. It would just be another kind of natural phenomenon to be described. On the other hand, suppose they only levitated when a car was about to crash into them. Then we would suspect a purpose behind the levitations, and a mind behind that purpose. Since the phenomenon is beyond the ability of any human, we would have to suspect a supernatural being. (Or, I suppose, a powerful extra-terrestrial who can know about these events in advance and who has the ability to move rocks from a distance. In effect, such a being would be supernatural from our point of view.)

This all ties in to the discussion about the fine tuning argument for naturalism, and what sort of observations would be considered evidence of supernatural intervention - but I'm still thinking about how.

Saturday, March 16, 2013

Cosmic De-tuning

I really appreciate all the comments on my Fine Tuning Argument for Naturalism (FTAN). Ben Yachov made the excellent point that a constant tweaking by God of the natural laws would itself appear to be a sort of natural law, and not a miracle. Tyler asks about the amount of parameter space that would be gained by God's intervention, and would it really be as large as I am suggesting. He writes, "...we need the numbers to argue concretely." Together, these two criticisms make a serious challenge to my argument. I have tried to answer them in brief in the comments, but here I will try to give a fuller account, complete with numbers.

Ben's point is a very cogent one, and requires careful consideration. If, for example, the strong force were too weak for atomic nuclei to hold together, and yet God held them together, how could that be seen by physicists as anything but a natural law - a strong force, or an additional force, strong enough to hold nuclei together? I feel that, in principle, an omnipotent God ought to be able to allow life to exist in a non-law-like way. But as I have nothing other than a law-like universe to refer to for examples, I think it would be hard to argue this point. So, I will (partially) concede Ben's point. Unfortunately for me, this means my FTAN in its generic form can't go through. Still, I think I can save the argument by appealing to particular cases where Ben's complaint can be circumvented.

First let me point out that, although science has revealed a unified and consistent picture of natural laws, that need not have been the case. A couple of decades ago there seemed to be a developing contradiction about the age of the universe. Astrophysicists modeling the evolution of stars were coming up with ages for those stars that were longer than the age of the universe that cosmologists were suggesting. Of course, creationists got very excited about this and began crowing about how the Big Bang model was inconsistent and so forth. In the event, the star ages got revised downwards and the universe age upwards, so that today there is no longer any contradiction. But that need not have happened. It could have happened that astrophysicists, applying their methods, came up with ages that were much larger than the age that cosmologists, using their own distinct methods, allowed for the age of the universe. Thus science would have had a conundrum. Methodological naturalism would prevent scientists from resolving this conundrum by postulating supernatural intervention. But the conundrum would be available for anyone who wanted to argue for a supernatural being or cause.

What does this have to do with Ben's point? Well, in a universe that runs by rigid natural laws, there is nothing surprising in the fact that the laws give consistent answers to questions like "How old is the universe?" But in a universe created by an omnipotent being, such consistency is surprising - because there are so many other ways God could have created the universe. For instance, by injecting stars that are already in an advanced stage into an already-expanding universe. In this, the young earth creationists are logically correct: God could have created a universe that was only 6000 years old, but which had the "appearance of age." And there is no particular reason that the ages, as deduced from different sources using the (incorrect) naturalistic assumption, would agree with each other. So, in addition to Ben's suggestion that miraculous intervention appears as natural law, there is another possibility:  that a miraculous intervention would show up as a conundrum, i.e., an apparent contradiction between natural laws. I will give some other examples below.

The second general point to make is that there is nothing in our known natural laws - apart from the supposed fine tuning of the parameters of those laws - that seems to prefer life to non-life. Given an omnipotent God, it is possible that there would be either special natural laws for life that differ from those for non-living things, or special exceptions to otherwise uniform natural laws that permit life in places where it would otherwise be impossible. The latter would (per Ben) appear as natural laws - but laws of a very particular type (see below) and of a type we do not see in our actual universe.

OK, now on to three specific examples to show what I mean, and to give some numerical values to answer Tyler.

In discussing these examples, I will use the term de-tuning to describe the amount of parameter space that is opened up by the God hypothesis. Let's call the naturalistically life-permitting range of the parameter the n-range, and the range of the same parameter for which an omnipotent God could allow life (like ours) to arise the g-range. Then define the de-tuning to be the ratio n-range divided by g-range. According to the assumptions of the Fine Tuning Argument, taken over into the FTAN, this ratio represents the probability that the parameter will fall in the naturalistically life-giving range, assuming theism to be true, and all other things being equal.

(Of course, God could presumably allow life unlike life on Earth to arise in a much wider range of parameters, so this ratio is a conservative estimate of the de-tuning.)
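The definition can be captured in a one-line helper (a sketch of my own, not from the post; the example numbers are purely hypothetical):

```python
def detuning(n_range: float, g_range: float) -> float:
    """De-tuning: the ratio of the naturalistically life-permitting
    range (n-range) to the range over which an omnipotent God could
    allow life (g-range). Under the FTA's equal-probability assumption,
    this is the probability that the parameter falls in the n-range,
    given theism."""
    if g_range <= 0:
        raise ValueError("g-range must be positive")
    return n_range / g_range

# Hypothetical parameter: n-range 1 unit wide inside a g-range of 100.
print(detuning(1.0, 100.0))  # → 0.01
```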

1. Cosmological Constant: The cosmological constant is a favorite target of fine-tuning arguments. It is a constant that appears in Einstein's General Relativity equations. A non-zero cosmological constant acts like a uniform energy density spread throughout space, and therefore is a candidate for the so-called "dark energy" of the universe.

If the cosmological constant is large and negative, the universe will re-collapse on itself too fast for galaxies, and therefore life, to form. If the cosmological constant is large and positive, the early universe will expand too fast for galaxies to form - the primordial hydrogen spread throughout space will not have time to collapse under its own gravity.

Now, it turns out that no matter how large the cosmological constant is, the expansion will not tear apart structures that have already formed. So, an omnipotent God could create a universe with a positive cosmological constant, then miraculously collect enough primordial matter into one place for a galaxy to form. Life could then evolve naturalistically without any further intervention. This scenario remains possible for any positive value of the constant, however large. Thus, the g-range is, in principle, infinite. But if we follow Collins and use the Planck scale as the reference range, then we get a conservative estimate for the de-tuning that is essentially the same as his result for the fine-tuning, which he gives as one part in 10^120.

This example shows that the de-tuning is not just a matter of slightly expanding the range of some parameter, as Tyler seems to suggest. The g-range is vastly larger than the n-range, and therefore (by the logic of the Fine Tuning Argument) the probability of finding our universe lying in the n-range is vastly smaller under the theistic assumption.

2. Origin of life on Earth: It is possible that natural laws allow life to exist, but don't allow life to arise from non-life. In this case, God could initiate life on Earth by "seeding" Earth with the original life-form(s), which then could go on to evolve into a variety of different forms in a naturalistic way.  There is obviously an infinite number of ways this could happen: God could start with just a single reproducing cell, or she could seed the earth with a whole range of flora and fauna.

There is no way to capture the infinite possibilities in a single number, but we can get a sort of very conservative limit on the de-tuning by considering the age of the Earth. It seems that naturalistic processes required 3 billion years to produce complex life. The Sun is expected to shine for about 10 billion years before becoming a red giant. Naturalistically, then, we should expect the age of the Earth to be between 3 and 10 billion years at the time intelligent life arises. But under theism, it could be anything from nearly zero to 10 billion years. This gives a not-very-impressive de-tuning of 0.7.
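A quick arithmetic check of that 0.7 (my own sketch; the 3- and 10-billion-year figures are the rough estimates above):

```python
# De-tuning for Earth's age at the rise of intelligent life.
# Figures in billions of years (Gyr), per the rough estimates above.
n_range = 10.0 - 3.0  # naturalistic window: between 3 and 10 Gyr
g_range = 10.0 - 0.0  # theistic window: God could seed life at any time
print(n_range / g_range)  # → 0.7
```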

3. Distance of the Earth from the Sun: The earth-sun distance doesn't make a good candidate for a fine-tuning argument: given the large number of planets in the galaxy it is highly likely that some of them will be at the "right" distance from their star for life to arise: the region called the "habitable zone."  (Nonetheless, some folks still cite it.) However, we can turn it into a de-tuning parameter as follows.

A planet too close to its star for life could be miraculously protected from the excess radiation by a sort of blanket surrounding the planet. At this blanket, energy disappears from the universe. Elsewhere, I assume, the laws of physics are the same as in our universe, so energy is conserved everywhere except at this blanket. Thus, the planet could be much closer to the star than the habitable zone, and still contain life. Moreover, the blanket could instead provide energy to the planet, and so life could exist on a planet farther from the star than the habitable zone, too. Indeed, such a planet would not need a star at all, so it could be wandering through interstellar space.

To estimate the de-tuning in this case, note that the Wikipedia page cites the habitable zone as about 1.4 AU wide, providing the n-range. The g-range extends from the surface of the star out into interstellar space; we can take it to be half the average distance between stars, or about 2 light-years. So the de-tuning is about 10^-5. (Again, I am making a very conservative estimate. There is no reason such a planet couldn't wander about in inter-galactic space. Allowing that scenario would give a much more impressive de-tuning factor.)
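The order of magnitude checks out (my own sketch; the 1.4 AU width and 2-light-year g-range are the estimates above):

```python
AU_PER_LIGHT_YEAR = 63_241.0       # astronomical units in one light-year
n_range = 1.4                      # AU: width of the habitable zone
g_range = 2.0 * AU_PER_LIGHT_YEAR  # AU: half the average distance between stars
print(n_range / g_range)           # ≈ 1.1e-5
```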

4. Chemistry for life:  Lastly, consider the possibility that the laws of chemistry are not (fine) tuned for life, but God changes the rules so that atoms and molecules behave differently when in living things than in non-living things. Per Ben's dictum, this would appear to us as a natural law: the chemistry of living things is fundamentally different from that of non-living things. (It would give rise to conundrums, too. Molecules are constantly passing from inside the body to outside via respiration, perspiration, etc., and we would have no naturalistic way of explaining the chemical change when they do.)

I don't know how to put a de-tuning value on this possibility, but obviously there are many more ways life could exist under this scenario than in a world with a single unified set of chemical laws.

I think I have shown that divine interventions vastly expand the possibilities for life to exist in the universe. These interventions could show up either as conundrums - apparent contradictions in the laws of nature, as in the first two cases above - or as natural laws that single out living things in a special way. Strictly speaking, only the former count as expanding the parameter ranges that allow for life to exist, since the latter may be considered as "naturalistic" scenarios. But special natural laws for life are just what we don't see in our universe: we don't see planets wrapped in energy-destroying bubbles, or see molecules that behave differently inside a living thing than outside.

Put the point the other way around (and more in the spirit of the FTA), and ask what would we expect to see under the theistic hypothesis? Other things being equal, we would expect to see lots of conundrums and lots of special-exception type laws that allow life to exist, because there are many more ways to make a life-containing universe that include those than there are ways that exclude them. The fact that we don't see such laws and conundrums, then, is evidence against the theistic hypothesis.

Monday, February 25, 2013

Collins vs. Stenger

Robin Collins has written a paper responding to some criticisms Victor Stenger made of the Fine Tuning Argument.

Collins starts out by conceding to Stenger that many of the claimed fine tunings do not qualify  as such. He focuses on a few examples where he thinks Stenger's arguments fail, basing his argument on a close consideration of the physics involved. Collins is a philosopher but, according to his CV, did some grad work in physics.  Stenger is himself a physicist. Collins accuses Stenger of getting the physics wrong. So who's right?

For this post, I'm only going to consider their remarks about the relative strengths of the various physical forces. I haven't read Stenger's book, The Fallacy of Fine Tuning (FOFT), so I'm basing my comments on this paper and Collins's claims about FOFT. I also should make clear that I think the Fine Tuning Argument is a bad argument overall. But I think it might be worthwhile for me as another physicist to try to evaluate the physics part of these arguments.

On the strength of gravity, Stenger writes,

The reason gravity is so weak in atoms is the small masses of elementary particles. This can be understood to be a consequence of the standard model of elementary particles in which the bare particles all have zero masses and pick up small corrections by their interactions with other particles.
Collins responds,

Although correct, Stenger’s claim does not explain the fine-tuning but merely transfers it elsewhere. The new issue is why the corrections are so small compared to the Planck scale. Such small corrections seem to require an enormous degree of fine-tuning, which is a general and much discussed problem within the Standard Model.
Collins is correct: the only natural energy scale in terms of fundamental physical constants is the Planck scale, and we have as yet no understanding of why the proton and neutron masses should be so small compared to the Planck scale. (I should point out, though, that when physicists talk of a parameter being "fine-tuned" it has nothing to do with being fine tuned for the existence of life. Rather, it is a matter of fine tuning for the observed physics of the universe.)

With regard to the relative strength of gravity compared to other forces, Stenger writes,

The gravitational strength parameter αG is based on arbitrary choice of units of mass, so it is arbitrary. Thus αG cannot be fine-tuned. There is nothing to tune.

Now, I have to say I find this statement very unclear. αG is a dimensionless parameter: it doesn't depend on any choice of units. It is defined as

αG ≡ G mp² / (ℏc),

where G is the gravitational constant, mp is the proton mass, ℏ is the reduced Planck constant, and c is the speed of light. No matter what system of units you use to measure those quantities, αG will have the same value.
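To illustrate the unit-independence, here is the number computed from approximate SI values (my own check, using standard CODATA-style constants):

```python
# Compute the dimensionless gravitational coupling alpha_G from SI values.
G    = 6.674e-11    # m^3 kg^-1 s^-2, gravitational constant
m_p  = 1.6726e-27   # kg, proton mass
hbar = 1.0546e-34   # J s, reduced Planck constant
c    = 2.9979e8     # m/s, speed of light

alpha_G = G * m_p**2 / (hbar * c)
print(alpha_G)  # ≈ 5.9e-39: the units all cancel, leaving a pure number
```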

Collins, who has read FOFT, says that Stenger means that we can replace the proton's mass in αG by the mass of some other fundamental particle. Collins, correctly, points out that this is irrelevant to the question of whether αG as defined is fine tuned. In the absence of any reasonable alternative interpretation, I have to agree with Collins again: changing from one parameter to a different parameter can't save you from fine tuning of the original parameter.

Collins goes on to give a rather involved discussion of how various physical properties scale as we allow αG to change. This is a sophisticated bit of argument; Collins pulls in arguments based on biology, plate tectonics, and planetary science. I tried hard to find some flaws in this analysis, but only came up with a few minor quibbles.

Stenger does make an important point that Collins simply ignores. He points out that if we just change one parameter, that parameter might appear to be fine-tuned. But if we allow for two (or more) parameters to vary at the same time, there might be a much wider range of values that allow for life. For instance,

The relative values of α and the strong force parameter αS also are important in several cases. When the two are allowed to vary, no fine-tuning is necessary to allow for both nuclear stability and the existence of free protons.
As I said, Collins makes no comment about this claim. There is an obvious counter to it, though: if we increase the number of parameters that vary, we also increase the available parameter space. Even if the life-permitting range of some value is increased in this way, the relative volume of parameter space might still be small.
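A toy Monte Carlo (entirely my own, with made-up ranges) makes this counter concrete: even when every value of one parameter admits some life-permitting value of the other, the life-permitting band can still occupy a small fraction of the joint parameter space.

```python
import random

random.seed(0)
N = 1_000_000
hits = 0
for _ in range(N):
    a = random.random()  # stand-in for alpha, rescaled to [0, 1)
    b = random.random()  # stand-in for alpha_S, rescaled to [0, 1)
    # Suppose (hypothetically) life requires the RATIO a/b to lie
    # within 10% of 1. For every b there is SOME life-permitting a...
    if b > 0 and 0.9 < a / b < 1.1:
        hits += 1
# ...yet the band covers under 10% of the joint parameter space:
print(hits / N)  # ≈ 0.095
```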

What's worse, Stenger goes on in the very next paragraph to step on his own toes:

There are two other facts that most proponents of fine-tuning ignore: (1) the force parameters α, αS, and αW are not constant but vary with energy; (2) they are not independent. The force parameters are expected to be equal at some unification energy. Furthermore, the three are connected in the current standard model and are likely to remain connected in any model that succeeds it.

If Stenger is right here, then the values of α and αS cannot be varied independently. So which is it?

First off, I don't know what Stenger means by, "the three are connected in the current standard model." In the current standard model, the three are independent parameters and can be varied independently.

Second, it is true that these couplings change with the energy scale at which they are measured: physicists call this "running coupling constants." But it's not clear to me why Stenger thinks this is relevant. Is he suggesting that at some vastly different energy scale, they might again have the correct ratios to allow for life? I'm not sure what the point of this remark is.

Thirdly, "The force parameters are expected to be equal at some unification energy." This is true in grand unified models (GUTs) and in some supersymmetric models, but these have not been verified experimentally and so remain highly speculative. Anyway, this causes problems for Stenger, because if these parameters are related by physical theory then they can't be varied separately and so his argument about varying two parameters at the same time is no longer available. 

Finally, let's return to Collins. He writes,

Next, I define a constant as being fine-tuned for ECAs [embodied conscious agents] if and only if the range of its values that allow for ECAs is small compared to the range of values for which we can determine whether the value is ECA-permitting, a range I call the “comparison range.” For the purposes of this essay, I will take the comparison range to be the range of values for which a parameter is defined within the current models of physics. For most physical constants, such as the two presented here, this range is given by the Planck scale, which is determined by the corresponding Planck units for mass, length, and time.

Taking the Planck scale as defining the comparison range is badly wrong, for two reasons.

First, the Planck scale sets a limit on current physics in the sense that we expect it to break down around that scale. But in fact we have good reason to think current physics breaks down well before that scale. Physicists have high hopes that the LHC will reveal "new physics" at an energy that is a factor of about 10^15 below the Planck energy. This has not yet happened, unfortunately. But to suppose that we can understand physics at energies all the way up to the Planck energy is just way off.

Secondly, even apart from the issue of new physics, it's absurd to suggest that we can determine whether ECAs are possible for values of parameters that differ greatly from the values we know as the actual ones. Think about it like this: if someone handed you the equations of the Standard Model and of general relativity, together with the values of the constants therein, would you be able to predict the existence of complex life forms? Certainly there are some ranges of some parameters that let us rule out complex life. For instance, if the cosmological constant is too big, then the universe will expand so fast that matter will never be able to clump together into stars and planets. But changing the ratio of (say) electromagnetic to gravitational force is a very delicate matter, and all sorts of unforeseen possibilities might arise, especially for values very far from the true values. Sure, you could say that life like ours would be impossible under those parameters. But the argument is supposed to be about embodied conscious agents in general, not just life like ours.

What's strange is that Collins himself took a much more modest view of the comparison range in a previous paper. There he showed a sophisticated appreciation of both of the points I just made. For example, he writes,

One limitation in the above calculation is that no detailed calculations have been performed on the effect of further increases or decreases in the strong and electromagnetic force that go far beyond the 0.5 and 4 per cent, respectively, presented by Oberhummer et al. For instance, if the strong nuclear force were decreased sufficiently, new carbon resonances might come into play, thereby possibly allowing for new pathways to become available for carbon or oxygen formation.

He introduces the "epistemically illuminated range" for a parameter: that range for which we can calculate with reasonable assurance of success whether a given value allows the formation of complex life. He applies this procedure to the fine tuning of the strong force for carbon and oxygen production in stars, and comes up with a not-very-fine-tuned value of 0.1. Here, too, he considers only the possibility of life like ours, and makes no attempt to address whether some very different sort of complex life might arise.

To sum up, I think Collins has done a pretty good job of pointing out problems with Stenger's analysis. His treatment of the physics here is a big improvement over some of his earlier work. But his arguments aren't enough to establish his claim. Collins's chosen "comparison range" is certainly too large to be reasonable. And his arguments only address the possibility of life that is substantially similar to ours. Yet vastly different forms of life might be possible in other parameter regions: we simply don't have the sophistication to predict their existence from bare physical laws. (It's possible that life-permitting regions might be scattered, fractal-like, through the parameter space, so that at any life-permitting point small changes don't allow life, while overall the probability of life is quite large.)