Tuesday, May 28, 2013

What's Wrong With Fine Tuning?

I've been saying for a while that I think fine tuning arguments are bad arguments. I want to explain why.

Suppose I shuffle a deck of cards and let you draw one. It is the queen of hearts. What was the probability that you would draw that card? Think carefully before answering!


Did you answer "one in 52"? If so, you're wrong. It was a pinochle deck - 48 cards, with two copies of each of 24 distinct cards - so your chance of drawing the queen of hearts was one in 24, not one in 52.

OK, now I shuffle a different deck and you draw one. What's the probability that you draw the queen of hearts now?

If you responded, "I don't know, you didn't tell me what deck you are using," you are correct! In fact, this deck was a magician's deck in which every card is the queen of hearts. So the probability was 100%.

One more: I hand you a card with a bunch of unknown symbols on it. What was the probability that you got that exact card?

I'm sure it's clear where I'm headed with this. The only possible answer that makes any sense is "I don't know." The probability is, as the philosophers say, inscrutable.

You could, of course, make up some probability calculation. Say I pixelate the card by covering it with an n by m grid. Then I count how many pixels have black ink. Say I find M black pixels. Next, I work out the number of ways that M things can be chosen from N = n*m things. Mathematicians call this number "N choose M." Finally, I declare that the probability that I got this particular card is one in N choose M.
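
To make this concrete, here is a minimal sketch of that calculation in Python. The grid dimensions and pixel count are made up for illustration; nothing here comes from an actual card:

```python
from math import comb

# Hypothetical grid: n rows by m columns. All numbers are made up.
n, m = 20, 30
N = n * m   # total pixels covering the card
M = 150     # pixels found to contain black ink

# Treat every arrangement of M inked pixels among N as equally likely,
# so the "probability" of this exact card comes out as one in N choose M.
ways = comb(N, M)
print(f"N choose M = {ways}")
print(f"'Probability' of this exact card: {1 / ways:.3e}")
```

The hidden assumption doing all the work is the uniform distribution over arrangements - precisely the thing we have no reason to believe.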

That calculation would make sense if I knew that this card was produced by a machine that randomly put spots of ink in a grid on the card. But I don't have any reason to think this card was produced by such a machine. In fact, I have good reason to think it wasn't, because if it had been, the result would be highly unlikely to look like recognizable symbols.

This example illustrates several mistakes that fine tuning arguments make. First of all, a probability calculation only makes sense if you have some idea of the probability space - the space of possibilities from which the outcome is drawn. The probability space is the "deck." If I have no idea what deck is being used, then I have no idea what the probability is for any given outcome.

We don't have a lot of universes to take data from - we only have one. So we don't have any idea of what the "deck" is: what sort of variation is possible for the supposedly fine tuned parameters.

Secondly, a probability calculation only makes sense if you have reason to believe that a probabilistic process is occurring. In the case of the symbols on the card, we don't have any reason to think that the ink was deposited according to a random process. Likewise, in the case of the parameters that set our scientific description of the universe, we have no reason to think that they were determined by a random process. (In some string theory-based models, the parameters are reshuffled randomly when a baby universe is born. But in these models there is an infinity of such baby universes, so the model already supplies the explanation on its own. In any case, these models are highly speculative.)

Folks arguing via fine tuning begin by making up a probability space: for instance, Collins's "epistemically illuminated range." Then they pixelate that range in an arbitrary way, usually by assuming a uniform distribution of probability over it. This leads them to some value for the probability of the observed parameter value - but that probability is completely arbitrary, just as much as the probability we calculated for the card with strange symbols on it.
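
To see just how arbitrary, here is a small sketch with hypothetical numbers: hold the width of the "life-permitting" window fixed and watch the computed probability track whatever range we happen to choose:

```python
# A parameter whose "life-permitting" window has width w (made up),
# given a uniform distribution over a candidate range [0, R].
w = 1.0

for R in (10.0, 1e3, 1e6, 1e9):  # candidate ranges, equally made up
    print(f"range [0, {R:g}]: P(life-permitting) = {w / R:.1e}")
```

The answer is whatever the chosen range makes it; nothing in the physics picks R.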

Bottom line: unless you have reason to believe that the "fine tuned" parameter was chosen by a random process from some given range, any calculation of probability for the value of that parameter is meaningless and arbitrary.

6 comments:

  1. Multiple problems ...

    * "OK, now I shuffle a different deck and you draw one. What's the probability that you draw the queen of hearts now? If you responded, "I don't know, you didn't tell me what deck you are using," you are correct!"

    Nope. The whole point of probability is to quantify your uncertainty. That includes the uncertainty over what kind of deck is being shuffled as well as what card is drawn. If you say: "Suppose I shuffle a deck of cards and let you draw one. It is the queen of hearts (QoH). What was the probability that you would draw that card?", then if I'm taking into account all the sources of uncertainty, I do a calculation like:

    P(QoH) = P(QoH | regular deck) x P(regular deck)
           + P(QoH | pinochle deck) x P(pinochle deck)
           + P(QoH | magician's deck) x P(magician's deck)
           + P(QoH | other deck) x P(other deck)

    The calculation is correct if I correctly take into account the information I have on what sort of deck it is. The calculation is not wrong because I don't have the same information you have. The problem with the answer 1/52 is that it doesn't consider all the relevant information. Even so, most of the time when someone has a deck of cards it is a regular deck, so I have good reason to set P(regular deck) close to one, in which case P(QoH) will be close to 1/52. When you start producing other kinds of decks, my probabilities for these cases P(pinochle deck), P(magician's deck) change appropriately - i.e. according to Bayes' theorem. Probabilities don't become inscrutable because they depend on things we don't know. *All* probability calculations depend on things we don't know. That's why we do probability calculations.
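
    A minimal sketch of that marginalization, with priors and likelihoods that are purely illustrative guesses:

    ```python
    # Law of total probability over deck types.
    # All priors and likelihoods here are illustrative guesses.
    decks = {
        # deck type: (P(deck), P(QoH | deck))
        "regular":  (0.90, 1 / 52),
        "pinochle": (0.05, 1 / 24),
        "magician": (0.01, 1.0),
        "other":    (0.04, 0.0),  # catch-all; its likelihood is itself a guess
    }

    p_qoh = sum(prior * like for prior, like in decks.values())
    print(f"P(QoH) = {p_qoh:.4f}")  # compare 1/52 ~ 0.0192
    ```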

    Replies
    1. Hi Luke, thanks for your comments.

      But there are infinitely many other decks. Sure, you can write down a formula like that, but without some constraint on the probability space, the formula doesn't help at all.

      Yes, probability depends on things we don't know, but they are "known unknowns." You can't do a probability calculation for "unknown unknowns." See below for more.

  2. Also ...
    * "We don't have a lot of universes to take data from - we only have one. So we don't have any idea of what the "deck" is: what sort of variation is possible for the supposedly fine tuned parameters."

    Yes we do, because we can do theoretical physics. Fundamental parameters are not set by the theory in which they appear, and so their possible range is set by the range over which I can predict what the resulting universe would be like. For initial conditions, it's even more straightforward. Physical theories usually involve equations, the solution to which describes a particular universe. Thus, all theories carry with them a set of possible universes in the form of the set of solutions to the equations. Initial conditions are the way in which a particular solution is usually specified. Such calculations involving universes that are not ours are not just done in the context of fine-tuning. They are essential to all physics. The probability of a theory T being correct given a certain set of data D is, by Bayes' theorem (http://en.wikipedia.org/wiki/Bayes'_theorem#Extended_form):

    P(T|D) = P(D|T) P(T) / [P(D|T) P(T) + P(D|notT) P(notT)]

    Note P(D|notT). This is the probability that we would observe data D given that our theory is not true. Even if we had the correct theory T, we would still need to calculate what D we would expect in other possible universes. Such calculations are essential to testing physical theories with data, the cornerstone of science. It is these calculations that fine-tuning uses. If P(D|notT) is inscrutable then so is P(T|D), and science is unable to learn anything at all from data.
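
    A minimal sketch of that formula, with invented numbers - a theory that predicts the data well, against a modest chance of the same data arising even if the theory is false:

    ```python
    def posterior(p_d_given_t, p_t, p_d_given_not_t):
        """Bayes' theorem, extended form: returns P(T|D)."""
        evidence = p_d_given_t * p_t + p_d_given_not_t * (1 - p_t)
        return p_d_given_t * p_t / evidence

    # Invented numbers for illustration only.
    print(posterior(p_d_given_t=0.9, p_t=0.5, p_d_given_not_t=0.1))  # -> 0.9
    ```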

    * "a probability calculation only makes sense if you have reason to believe that a probabilistic process is occurring."

    Nope. Probability quantifies our ignorance. We can be ignorant of facts even if those facts are the result of a deterministic process. That's why probability is so useful to classical statistical mechanics, even though there are no inherently "probabilistic processes" (i.e. quantum) involved. If your claim were correct, classical statistical mechanics would not make sense. More generally, a police detective can consider the probability of murder vs. suicide without supposing that the suspicious death was the result of a probabilistic process.

    Replies
    1. On the second point, it is crucial to classical statistical mechanics that the dynamics is ergodic, i.e., (effectively) probabilistic. If, for example, I have a gas of N non-interacting particles, each bouncing back and forth along a path perpendicular to one pair of opposite walls, then the dynamics is not ergodic and the conclusions of stat mech are incorrect.
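
      A toy version of that non-ergodic gas, with made-up numbers: particles that move only along x never feed energy into y, so the equipartition stat mech predicts never happens:

      ```python
      import random

      # Toy non-ergodic "gas": non-interacting particles in a unit box,
      # each moving only along x. All numbers are made up.
      N, dt, steps = 100, 0.01, 10_000
      x = [random.random() for _ in range(N)]
      vx = [random.uniform(-1, 1) for _ in range(N)]
      vy = [0.0] * N  # nothing in the dynamics ever changes this

      for _ in range(steps):
          for i in range(N):
              x[i] += vx[i] * dt
              if not 0.0 <= x[i] <= 1.0:  # elastic bounce off a wall
                  vx[i] = -vx[i]

      ke_x = sum(v * v for v in vx) / 2
      ke_y = sum(v * v for v in vy) / 2
      print(f"KE in x: {ke_x:.2f}, KE in y: {ke_y:.2f}")  # y stays exactly 0
      ```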

      Yes, a police detective can consider the probability of various scenarios, but only because s/he has experience of many similar cases. Without such experience (personal or archival), any probability estimate would be purely arbitrary.

      More to the point, though, is that it is precisely because the death was NOT a result of probabilistic processes that the detective sometimes comes to a conclusion that flies in the face of the probabilities. In situations of type X, probability says it is suicide 99% of the time, but even though this particular situation is of type X, there are details that lead the detective to the conclusion of murder.

      On the first point, I need to think some more before replying.

    2. OK, so on your first point about Bayesian updating. This is partly a matter of Bayesian vs. frequentist interpretations of probability, which we aren't going to resolve here. One of my complaints about Bayesian probability is precisely the issue of how to calculate, or even estimate, the various Ps in that formula. But let me make a few remarks.

      I've never seen a scientific paper that gives the Bayesian update factor. (This may be simply my own ignorance. Please let me know if you know otherwise.) Instead, what scientists do is to compare different theories. That is, they compare P(D|T1), P(D|T2), etc., where T1 and T2 are competing theoretical possibilities.

      Still, I think you are probably right that scientists use something like a Bayesian update, in an informal, heuristic way, to evaluate theories. But, following on the previous comment, what we can do is expand the sum in the denominator to include all the theories Tn that we have a reasonable expectation might be correct, and then assign a small, arbitrary value to the probability that none of these theories is correct. That allows a Bayesian update without the need for a detailed calculation of P(D|notT).
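
      A minimal sketch of that move, with invented numbers: enumerate the theories we take seriously, and give the "none of the above" catch-all a small, arbitrary prior and a guessed likelihood:

      ```python
      # Bayesian comparison over an explicit list of theories, plus a catch-all.
      # All numbers are invented for illustration.
      theories = {
          # theory: (prior P(Tn), likelihood P(D|Tn))
          "T1": (0.49, 0.80),
          "T2": (0.49, 0.20),
      }
      catch_prior, catch_like = 0.02, 0.50  # small arbitrary prior, guessed likelihood

      evidence = sum(p * l for p, l in theories.values())
      evidence += catch_prior * catch_like

      for name, (p, l) in theories.items():
          print(f"P({name}|D) = {p * l / evidence:.3f}")  # T1: 0.784, T2: 0.196
      ```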

      Finally, you may be right that requiring a "probabilistic process" is too strong, but you haven't convinced me of it yet.

      BTW: Am I correct in assuming you are the Luke Barnes of this paper?

  3. A good piece, I think, but then I am no fan of Bayes' theorem -- not as to the theory itself, mind you, which I leave to the mathematicians among you, but as to its potential for philosophic misapplication. The idea that anyone would apply Bayesian probability to actually resolve an existential question, as opposed to a merely scientific one, seems remarkably illogical to me. Bayes may have made a vast improvement over Pascal, but we still know far too little about the actual probability space out "there", as you have pointed out.

    Any thoughts on the current significance of the Higgs boson? You philosophers of science don't seem to have written much on that subject (that I can find). Best wishes.
