The Fine-Tuning of the Universe for Life
1 Introduction: Fine-Tuning in Physics
When a physicist says that a theory is fine-tuned, they mean that it must make a suspiciously precise assumption in order to explain a certain observation. This is evidence that the theory is deficient or incomplete. As a simple example, consider a geocentric model of the Solar System. Naively, at any particular time, the Sun and planets could be anywhere in their orbits around the Earth. However, in our night sky, Mercury is never observed to be more than 28° from the Sun, and Venus is never seen more than 47° from the Sun.
Can a geocentric model explain this observation? Yes, but only by adding a postulate. In Ptolemy’s geocentric model, Mercury and Venus travel on epicycles, and those epicycles are centred on a line joining the Earth to the Sun (Figure 1). This explains the data, so the model does not fail. However, in the context of the model, this assumption is unmotivated and suspiciously precise. Given only that the planets and Sun orbit the Earth, there is no reason to expect such an arrangement.

This fine-tuning of the geocentric model doesn't necessarily mean that it is wrong, but it should make us wary. We should search for a model in which the data is explained more naturally: Mercury and Venus are never seen too far from the Sun because the planets orbit the Sun, not the Earth. (This isn't how it happened historically, but it does illustrate the principle.)
Similar arguments play an important role in modern cosmology and particle physics. A standard cosmology textbook case for cosmic inflation goes as follows (e.g. Peacock, 1998). In the standard model of cosmology, the geometry of the universe can be negatively curved, flat, or positively curved, depending on whether the density of the universe is less than, equal to, or greater than the critical density. In this model, two facts seem to be in tension with each other. Firstly, the matter in the universe causes the density of the universe to evolve away from critical. Secondly, observations tell us that the density of the universe is very close to critical.
What about in the past? If we extend the model back to nucleosynthesis, about 1 second after the beginning, then the density of the universe must be within one part in $10^{16}$ of the critical density in order to still be close to critical today. The further we push back, the tighter the constraint: at the Planck time, it is one part in $10^{60}$. As with Ptolemy's model, the standard model of cosmology can explain the data, but only with an unmotivated and suspiciously precise assumption. We must simply assume that the density of the universe was extremely close to critical in its earliest moments. This motivates inflationary models, in which an early period of accelerating expansion drives the density towards critical (see Ijjas in this volume).
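To see where such numbers come from (a standard back-of-envelope sketch, not a calculation from the original text): the deviation from critical density evolves as

$$|\Omega - 1| = \frac{|k|}{a^2 H^2},$$

and during radiation domination $aH \propto t^{-1/2}$, so $|\Omega - 1|$ grows roughly in proportion to $t$. For the deviation to be no larger than order unity today, some $10^{17}$ seconds after the beginning, it must have been roughly sixteen orders of magnitude smaller at $t \approx 1$ second (the growth slows somewhat in the later, matter-dominated era).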
A second example comes from particle physics (Dine, 2015). The observed mass of the Higgs particle can be written in terms of a "bare" value and quantum corrections. These quantities are independent in the model. However, the size of the quantum corrections diverges quadratically with the scale up to which the effective theory can be trusted. Dine says, "if the cutoff is the Planck scale, this correction is enormous …about thirty four orders of magnitude larger than [the observed value], corresponding to a fine tuning of the bare parameters against the radiative correction at the part in $10^{34}$ level."
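Schematically (a sketch of the standard bookkeeping, loop factors omitted, with $\Lambda_{\rm cutoff}$ the scale up to which the effective theory is trusted):

$$m_H^2 = m_{\rm bare}^2 + \delta m^2, \qquad \delta m^2 \sim \Lambda_{\rm cutoff}^2.$$

If $\Lambda_{\rm cutoff}$ is the Planck scale ($\sim 10^{19}$ GeV), then $\delta m^2$ exceeds the observed $m_H^2 \approx (125~\mathrm{GeV})^2$ by roughly $(10^{19}/125)^2 \sim 10^{34}$, which is where the "part in $10^{34}$" figure comes from.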
Again, the model can explain the observed value, but only by making the unmotivated and suspiciously precise assumption that the bare value almost perfectly cancels out the quantum corrections (see also Donoghue, 2007). Particle physicists tend to call this situation “unnatural” rather than fine-tuned, but it’s a similar idea. As Dine notes, “naturalness has for many years been a guiding principle in the search for physics beyond the Standard Model”.
The assumptions underlying these arguments have been the subject of much theoretical attention, but the logic is quite widely accepted. The cosmological constant problem, the flatness problem, the big and little hierarchy problems of particle physics (see Jacquart in this volume) and the strong CP problem (see Ijjas in this volume) can all be framed as fine-tuning problems.
One case of fine-tuning is particularly striking. The data in question are not the precise measurements of cosmology or particle physics, but a more general feature of our universe: it supports the existence of life. Before we look at this in more detail, it will be helpful to place fine-tuning in the context of Bayesian approaches to testing physical theories.
2 Bayesian Accounts of Fine-Tuning
The Bayesian approach to probability theory views probabilities as quantifying the degree of plausibility of some proposition, given other propositions. Bayesians have argued that the familiar probability axioms of Kolmogorov (1933) (or similar) also apply to degrees of plausibility. This can be shown via Dutch book arguments, representation theorems that trace back to Ramsey (1926), or (more common among physicists) the theorem of Cox (1946), which shows that degrees of plausibility satisfying some intuitive desiderata must obey the rules of probability (see also Jaynes, 2003; Caticha, 2009; Knuth & Skilling, 2012).
In the Bayesian approach, physical theories are tested as follows. Let:

• $T$ = the proposed theory to be tested. As a concrete example, $T$ may represent a set of symmetry principles, from which we can derive the mathematical form of a Lagrangian (or, equivalently, the dynamical equations), but not the values of its free parameters.

• $D$ = our observations of this Universe.

• $B$ = everything else we know. For example, we treat the findings of mathematics and theoretical physics as given, so these are included in $B$. As I have defined it for our purposes here, the information in $B$ does not give us any information about which possible world is actual. The theoretical physicist can explore models of the universe mathematically, without concern for whether they describe reality.
We would then like to know how plausible $T$ is, in light of everything that we know. If the posterior probability $p(T|DB)$ — read "the probability of $T$ given $D$ and $B$" — were to descend to us on a cloud from the heavens, then our job would be done. Alternatively, we may need some help in calculating the posterior, and so we turn to Bayes' theorem,

$$p(T|DB) = \frac{p(D|TB)\,p(T|B)}{p(D|B)}. \qquad (1)$$
If the theory in question has free parameters, which we generically denote $\theta$, then we must take into account our lack of knowledge of these parameters in evaluating the likelihood of the data given the theory, $p(D|TB)$. We can think of this as dividing the theory into a large number of sub-theories, each with a different value of the free parameters. To calculate the likelihood, we need to average over these sub-theories — this is known as marginalizing over nuisance parameters. Sub-theories that can account for the data bring the average up, and sub-theories that can't bring the average down.
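Written out (a standard identity, with $p(\theta|TB)$ the prior distribution over the parameters):

$$p(D|TB) = \int p(D|\theta T B)\, p(\theta|TB)\, \mathrm{d}\theta.$$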
As a simplistic model, suppose a free parameter $\theta$ varies uniformly over a range $\Delta_T$, but only a small range $\Delta_l$ is consistent with the data. Then the theory's likelihood is penalized by a factor $\Delta_l/\Delta_T$. The smaller the range of free parameters that accounts for the data, relative to the range dictated by the theory, the more the likelihood is penalized. Fine-tuning can be translated directly into improbability within a Bayesian approach (see also Aguirre, 2007; Fowlie, 2014; Barnes, 2017, and references therein).
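A quick numerical check of this penalty factor (a toy sketch; the ranges, grid size, and top-hat likelihood are arbitrary illustrative choices, not from the original text):

```python
import numpy as np

# Toy marginal likelihood: parameter theta has a uniform prior over a range
# Delta_T, but only a narrow window of width Delta_l fits the data.
Delta_T = 100.0   # range of theta allowed by the theory (arbitrary units)
Delta_l = 0.1     # range of theta consistent with the data

theta = np.linspace(0.0, Delta_T, 1_000_001)
d_theta = theta[1] - theta[0]
prior = np.full_like(theta, 1.0 / Delta_T)       # uniform prior density
likelihood = (theta <= Delta_l).astype(float)    # idealized top-hat likelihood

# p(D|TB) = average over sub-theories: integral of likelihood * prior
marginal = np.sum(likelihood * prior) * d_theta
print(marginal, Delta_l / Delta_T)  # both ~0.001: the Delta_l/Delta_T penalty
```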
3 The Fine-Tuning of the Universe for Life
Part of exploring any physical model is calculating the effect of varying its free parameters. As we have seen, this is necessary for calculating the likelihood of the data given the theory (via marginalizing), and so this can tell us whether the theory is fine-tuned or not. Beginning in the 1970s, physicists noted that seemingly small changes to the fundamental constants of nature and the initial conditions of the cosmos not only brought our models into conflict with precise measurements; they described universes in which no life form could exist. The complexity and stability required by any known or thus-far conceived form of life could be rather easily erased.
This fine-tuning of the universe for life was first investigated by Carter (1974), Silk (1977), Carr & Rees (1979), Davies (1983), and Barrow & Tipler (1986), and has been reviewed recently by Hogan (2000), Barnes (2012), Schellekens (2013) and Lewis & Barnes (2016). We will consider a few examples.
3.1 The Cosmological Constant
The cosmological constant problem is described in the textbook of Burgess & Moore (2006) as “arguably the most severe theoretical problem in high-energy physics today, as measured by both the difference between observations and theoretical predictions, and by the lack of convincing theoretical ideas which address it.” The problem is as follows. Quantum field theory describes particles as configurations of a field. There is a particular configuration of the field that corresponds to a state with zero particles; this is known as the vacuum state. Because the field is still there, we can ask: how much energy is contained in the vacuum?
The absolute energy of the field doesn't affect the interactions of the standard model of particle physics, which depend only on energy differences. But gravity, in Einstein's theory, responds to the absolute amount of energy. In a homogeneous and isotropic universe, vacuum energy has the same effect as Einstein's cosmological constant. When cosmologists speak of the cosmological constant, they usually mean the sum of the "bare" cosmological constant in Einstein's equation and all the forms of energy in the universe that behave in the same way. This is the quantity that is constrained by cosmological data. In Planck units ($\hbar = c = G = 1$) and expressed as a density, the observed cosmological constant has the value $\rho_\Lambda \approx 10^{-123}$.
We can estimate the contribution to the energy in the vacuum from a given quantum field. Loosely speaking, even in the vacuum state, virtual particles will be created and annihilate, forming loops in a Feynman diagram. The vacuum energy depends on the energy scale up to which we trust the theory to describe this process. Even if we only consider well-understood fields (e.g. the electron field) up to energy scales that have been thoroughly investigated by experiment (say, 100 GeV), the contribution to the vacuum energy is $\sim 10^{-68}$, or 55 orders of magnitude larger than the observed value. If we extend the range of our theory up to a popular energy scale where new physics is expected, the supersymmetry scale, then the contribution to the vacuum energy is $\sim 10^{-64}$. If we extend all the way to the Planck scale, where we cannot trust our theories because they do not account for quantum gravity effects, the contribution to the vacuum energy is of order unity, 123 orders of magnitude larger than the observed value.
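These estimates can be reproduced to order of magnitude with one line of arithmetic (a sketch: it simply raises the cutoff-to-Planck energy ratio to the fourth power, the scaling described above; the cutoff values are illustrative assumptions):

```python
import math

E_PLANCK = 1.22e19  # Planck energy in GeV
OBSERVED = 1e-123   # observed cosmological constant, Planck units (density)

def vacuum_energy(cutoff_gev):
    """Rough vacuum energy density in Planck units: (E_cutoff / E_Planck)^4."""
    return (cutoff_gev / E_PLANCK) ** 4

for label, cutoff in [("collider-tested scales (~100 GeV)", 1e2),
                      ("supersymmetry scale (~1 TeV)", 1e3),
                      ("Planck scale", E_PLANCK)]:
    rho = vacuum_energy(cutoff)
    orders = math.log10(rho / OBSERVED)
    print(f"{label}: ~1e{math.log10(rho):+.0f} Planck units, "
          f"{orders:.0f} orders above observed")
```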
This is a fine-tuning problem. Quantum field theory and general relativity can explain the small observed value of the cosmological constant, but only by supposing that the different (positive and negative) contributions to the vacuum energy from each quantum field happen to cancel each other to 123 decimal places. This requires an unmotivated but suspiciously precise coincidence between a number of independent factors.
As an example of fine-tuning for life, the cosmological constant problem is a near-perfect storm.
• It's actually several problems. Each quantum field (electron, quark, photon, neutrino, etc.) adds a very large (positive or negative) contribution to the vacuum energy of the universe.

• General Relativity won't help. Einstein's theory links energy and momentum to spacetime geometry. It does not dictate what energy and momentum exists in the universe. Universes that are no good for life are perfectly fine by the principles of General Relativity.

• Particle physics probably won't help. All particle physics processes, being described by quantum field theory, depend only on energy differences; only gravity responds to absolute energies. Thus, particle physics is largely blind to its effect on cosmology, and thereby life.

• It isn't just a problem at the Planck scale, so quantum gravity won't necessarily help. As noted above, we don't need to trust quantum field theory all the way up to the Planck energy in order to see the cosmological constant problem. It is entrenched firmly within well-understood, well-tested physics.

• Alternative forms of dark energy have very similar problems. They usually posit some other kind of field, and so the problem of the vacuum energy of the field remains, unchanged and unsolved. See Jacquart (this volume) for more discussion of dark energy.

• We can't aim for zero. Before the accelerated expansion of the universe was discovered in 1998, it was thought that some principle or symmetry would set the cosmological constant to zero. Even this was a speculative hope, and it has since evaporated.

• The quantum vacuum has observable consequences, and so cannot be dismissed as mere fiction. In particular, an electron in an atom feels the influence of the quantum vacuum (the Lamb shift). Our theory works beautifully for electrons and atoms. Why doesn't cosmic expansion feel the influence of the quantum vacuum?

• The cosmological constant has a very obvious and definitive effect on the necessary conditions for life. A positive cosmological constant causes the expansion of the universe to accelerate, freezing structure formation. Make the cosmological constant a few orders of magnitude larger and structure formation freezes before anything has formed. The universe would be a thin, uniform hydrogen and helium soup, a diffuse gas where the occasional particle collision is all that ever happens. A very simple way to make a universe lifeless is to make it devoid of any structure whatsoever. Alternatively, a negative cosmological constant causes the universe to recollapse. If the cosmological constant were negative and similarly far from its observed value, the universe would recollapse mere seconds after the big bang.
3.2 The Parameters of the Standard Model
The standard model of particle physics has 25 free parameters which are constrained by experiment. Many of these play a crucial role in providing the complexity required by life.
The Higgs field "gives mass" to the fundamental particles of the standard model. We can write their masses in terms of the vacuum expectation value (vev) of the field ($v \approx 246$ GeV) as $m_i = y_i v / \sqrt{2}$, where $y_i$ is the particle's dimensionless Yukawa parameter. As with vacuum energy, quantum corrections to the bare Higgs vev are predicted to be of the same order as the scale up to which we trust the theory. The observed value of $v$ is unnaturally small.
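For concreteness (a sketch using the relation above; the Yukawa values are the approximate standard-model figures implied by the observed masses, quoted here as assumptions):

```python
import math

V_HIGGS = 246.0  # Higgs vacuum expectation value in GeV

def fermion_mass_gev(yukawa):
    """Fermion mass from m = y * v / sqrt(2)."""
    return yukawa * V_HIGGS / math.sqrt(2)

print(fermion_mass_gev(2.9e-6))  # electron: ~0.0005 GeV (0.5 MeV)
print(fermion_mass_gev(0.99))    # top quark: ~172 GeV
```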
Similarly, small changes to $v$ significantly affect how particles interact and bind. Damour & Donoghue (2008) refine the approach of Agrawal et al. (1998) by considering nuclear binding, and conclude that unless $v$ lies in a narrow window around its observed value, either hydrogen is unstable to the reaction $p + e^- \rightarrow n + \nu_e$ (if $v$ is too small) or else there is no nuclear binding at all (if $v$ is too large).
Similarly, the strengths of the fundamental forces are subject to anthropic constraints. For example, unless the down quark is sufficiently heavier than the up quark, the electromagnetic contribution to the mass of the proton causes it to be heavier than the neutron, making the proton unstable (Hogan, 2000; Hall & Nomura, 2008). If the strong force were a few percent weaker, the deuteron would be unbound (Pochet et al., 1991). The first step in stellar burning would then require a three-body reaction to form helium-3. This requires such extreme temperatures and densities that stable stars cannot form: anything big enough to burn is too big to be stable (Barnes & Lewis, 2017). (The fine-tuning required for stable, life-powering stars has been clarified by recent work: Adams, 2008; Barnes, 2015; Adams, 2016; Adams & Grohs, 2016, 2017.) Weaken the strong force by a few more percent, or increase the strength of electromagnetism, and carbon and all larger elements are unstable (Barrow & Tipler, 1986). The parameters of the standard model must walk a tightrope in order to form stable nuclei and support stable stars.
3.3 The Dimensionality of Spacetime
Spacetime is the arena in which physics takes place. At the length scales relevant to nuclei, atoms, stars, and the observable universe, spacetime is described by three dimensions of space and one of time. It is often straightforward to write down our familiar laws of nature in any number of dimensions. For example, in $m$ time dimensions and $n$ space dimensions, the wave equation is

$$\sum_{i=1}^{m} \frac{\partial^2 \phi}{\partial t_i^2} = c^2 \sum_{j=1}^{n} \frac{\partial^2 \phi}{\partial x_j^2}, \qquad (2)$$

for the scalar wave variable $\phi$ and wave speed $c$.
Given that we can theoretically explore such universes, what would they be like? This question has been addressed by Ehrenfest (1917), Whitrow (1955), Barrow & Tipler (1986), and Tegmark (1997). It has been known for some time that Newtonian gravity only predicts stable planetary orbits in three space dimensions (Bertrand's theorem). With four space dimensions, for example, slightly non-circular orbits are spirals, not ellipses: they would send the planet into the star or off into empty space. The same applies to atomic orbits described by the Schrödinger equation: there is no stable ground state.
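The orbital instability can be seen from the effective potential (a standard sketch, not from the original text; $n$ is the number of space dimensions, $L$ the orbital angular momentum, and $\mu$ the planet's mass). An inverse-square-type force law generalizes to $F \propto r^{-(n-1)}$, giving

$$V_{\rm eff}(r) = -\frac{k}{(n-2)\,r^{\,n-2}} + \frac{L^2}{2\mu r^2}.$$

For $n = 3$, the centrifugal term dominates at small $r$, so $V_{\rm eff}$ has a minimum and near-circular orbits are stable; for $n \geq 4$, the attractive term falls off at least as fast as the centrifugal barrier, the minimum disappears, and slightly perturbed orbits spiral in or escape.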
We can also vary the number of time dimensions. In such a universe, an observer will have their own clock that measures time along their worldline; but what would they experience? Tegmark (1997) notes that linear partial differential equations, of which the wave equation is one example and by which many known laws can be approximated locally, have interesting properties when there is more than one time dimension. In our universe, we can approximately predict the behaviour of a physical system into the future on the basis of knowledge of our immediate environment. (I don't necessarily mean predict in a mathematical sense: a bird "predicts" the path of a flying insect to catch it.) But if there were more (or fewer) than one time dimension, then the problem would be mathematically ill-posed, being infinitely sensitive to the initial conditions. The behaviour of one's environment could not be predicted using only local, finite-accuracy data, making storing and processing information impossible.
4 The Multiverse
Fine-tuning in physics serves as impetus to search for a better theory, one which can account for the facts in a more natural way, without unmotivated assumptions. But what could naturally explain a life-permitting universe?
Perhaps we won the cosmic lottery: a life-permitting universe exists, despite the seemingly overwhelming odds, because the universe as a whole consists of a vast, variegated ensemble of sub-universes — a multiverse.
A viable multiverse model needs a few ingredients. The first is a physical theory that goes beyond the standard models by promoting the constants of nature and initial conditions to dynamic variables. We have some hints about how to do this. The strengths of the fundamental forces of particle physics are a function of energy, and seem to converge at an energy far above our current experiments. This has led to the development of Grand Unified Theories (GUT), in which the strong nuclear force, weak nuclear force, and electromagnetism are manifestations of a single, unified force (see Raby, 2010). At low energy, the greater symmetry of the unified field is spontaneously broken: the strengths of the forces are not written in stone in the fundamental equations, but rather are a frozen accident.
There are other ways to promote the constants to variables. In string theory, there is a landscape of solutions to the fundamental equations, with the familiar “constants” of physics written into the various folds and holes of the extra, compactified spatial dimensions (Schellekens, 2013). They become free parameters of the solution to the equations, rather than appearing in the equations themselves.
The second ingredient of a multiverse theory is a cosmological mechanism to create domains of the universe with different values of the “constants”. The leading contender today is cosmic inflation: in its earliest moments, the universe expanded at an accelerating rate, driving it towards critical density and laying down the seeds of cosmic structure.
The successful predictions of inflation require only that our observable universe inflated, but it has been argued that inflation will naturally produce a multiverse (see Linde, 2015). Most inflationary models posit a form of energy called an inflaton field that drives the expansion of the universe. The physics of a quantum field is codified in its potential: the dynamics is analogous to a ball rolling on a hill, and the shape of the hill tells us how the motion of the ball depends on the value of the field. For an inflaton field to cause accelerating expansion, it must be rolling slowly on a very flat section of the potential. Inflation ends when the field rolls off the flat section, usually into a valley. As the field oscillates around the bottom of the valley, reheating begins: the energy in the inflaton field is transferred into ordinary matter and radiation, beginning the hot big bang phase.
But the field is a quantum field, and so will not evolve deterministically (depending on your interpretation of quantum mechanics). Somewhat simplistically, consider an inflating region of the universe, in which the inflaton field dominates the energy of the universe and is slowly rolling. While in most of the region the field will roll into the valley and inflation will end, there is a finite probability that the field in some sub-region will evolve to a state further up the slope. This part of the universe will inflate for longer. Because this sub-region keeps growing in size, it will soon be larger than the original region, and so inflation will always continue somewhere. Given a sufficiently large initial inflating region, post-big-bang pockets form in an inflating background.
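A toy model of this runaway (a sketch with invented numbers: each step, an inflating patch expands into GROWTH daughter patches, and each daughter continues inflating with a small assumed probability P_STAY; because GROWTH * P_STAY > 1, the inflating volume typically grows without bound):

```python
import numpy as np

rng = np.random.default_rng(1)
GROWTH = 20   # volume growth per time step (roughly e^3 per e-fold)
P_STAY = 0.1  # assumed chance a patch fluctuates up-slope and keeps inflating

patches = 1
history = []
for _ in range(15):
    # each inflating patch spawns GROWTH daughters; each keeps inflating
    # with probability P_STAY, otherwise it reheats into a big-bang region
    patches = rng.binomial(patches * GROWTH, P_STAY)
    history.append(patches)

print(history)  # expected 2x growth per step: inflation never ends everywhere
```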
If the energy scale of reheating is above the symmetry breaking scale of the fields in the universe, then the symmetry will break differently in different sub-universes. This creates a population of sub-universes with different ‘constants’ and big bang ‘initial’ conditions.
The final ingredient is a selection effect (Wall & Jenkins, 2003; Bostrom, 2002; Neal, 2006). Consider the prediction of cosmic microwave background (CMB) anisotropies in the standard big bang model, from which cosmologists infer the values of various cosmic parameters. Like any thermodynamic system, the recombining plasma fluctuates. So, in a sufficiently large universe, the probability that someone observes the CMB that we see approaches one regardless of the values of the cosmic parameters. If we tested physical models by calculating the probability that some observation in the universe matches our actual observations, then any values of the cosmic parameters would do in an infinite universe. We couldn't infer their values from observations. A multiverse would make this problem even worse.
To resolve this problem, remember that we don’t just know that some observation has taken place, but that a particular observer has made an observation. Even if some observer sees a misleading CMB, the vast majority won’t, justifying our inference. We apply this to the multiverse: that some region of the universe permits life is a good start but not sufficient. What will a typical observer see?
The anthropic prediction by Weinberg (1987) of the cosmological constant provides an excellent test case. Given a large enough variety of sub-universes with different values of the cosmological constant, some sub-universe will have a value that permits structure to form. In such an ensemble, asks Weinberg, what cosmological constant would a typical observer see? There is nothing in fundamental physics as we know it that singles out $\rho_\Lambda = 0$ as a privileged value, so we assume for the moment that, for values of $\rho_\Lambda$ much smaller than the 'natural' Planck scale, the multiverse produces a roughly uniform distribution of values. Then (considering positive values of $\rho_\Lambda$ for now) what is the largest value of $\rho_\Lambda$ that permits the formation of structure? Weinberg's analytic calculation gives an upper limit of a few hundred times $\bar{\rho}_m$, where $\bar{\rho}_m$ is the present cosmic mass density. Weinberg made this prediction before observations showed that $\rho_\Lambda$ is comparable to $\bar{\rho}_m$.
A typical observer would expect to observe a vacuum energy roughly comparable with the anthropic upper bound. It can’t be larger, of course — there are no observers in those sub-universes to make an observation. Weinberg’s calculation gives the upper bound as being two orders of magnitude above the actual value, which is close enough to take the calculation seriously. My colleagues and I are currently repeating Weinberg’s calculation with more sophisticated supercomputer models of galaxy formation.
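A toy version of the typicality calculation (a sketch: the flat prior follows Weinberg's assumption, but the observer-weighting function below is an invented stand-in for a real structure-formation calculation):

```python
import numpy as np

rng = np.random.default_rng(0)
RHO_MAX = 1.0  # anthropic upper bound on rho_Lambda, arbitrary units

# Flat prior on rho_Lambda below the bound, per Weinberg's assumption
rho = rng.uniform(0.0, RHO_MAX, 1_000_000)

# Invented observer weight: the number of observers falls smoothly to zero
# as rho_Lambda approaches the bound and structure formation freezes out
weight = (1.0 - rho / RHO_MAX) ** 3

# Median rho_Lambda seen by a randomly chosen observer
order = np.argsort(rho)
cdf = np.cumsum(weight[order])
median = rho[order][np.searchsorted(cdf, 0.5 * cdf[-1])]
print(median)  # ~0.16: a typical observer sees rho_Lambda within an order of
               # magnitude or so of the anthropic bound, as Weinberg argued
```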
If we had observed a value that was ten orders of magnitude smaller than the upper bound, then we would conclude that one of the assumptions in our model is probably wrong. We would look for a dynamical or symmetry-based explanation, rather than an anthropic one.
This kind of case for the multiverse has been criticized as speculative and untestable. But it should be remembered that such considerations are almost unavoidable in cosmology. Just as the astronomer must understand their telescope before they can understand what they see through it, when a cosmologist models the universe, they are inevitably modeling a system that contains themselves. We cannot pretend to stand outside the universe. Selection effects cannot be ignored. We are not Dr Frankenstein; we are the monster. We have woken up in a laboratory and are trying to understand how it made us.
We can test the multiverse using Bayesian probability theory. In this case, the "data" to be explained are the constants of nature. If the fine-tuning for life implies that almost all observers in the multiverse would observe similar constants to what we observe, then this could provide a major advantage for a multiverse hypothesis over theories in which the constants are free parameters (Aguirre, 2007; Barnes, 2017).
One way in which a multiverse theory can fail spectacularly is known as the Boltzmann Brain problem. Physical theories predict observations, and so a multiverse model should — in principle — be able to predict what kind of observer we would expect to be. One striking feature of our status as observers is that we formed through a long, consistently entropy-increasing process: gravitational collapse into galaxies and stars, stellar burning and supernovae, planet formation, and biological evolution. In some multiverses, including Boltzmann’s original multiverse (Boltzmann, 1895), most observers form via a chance statistical fluctuation. Without a consistent thermodynamic arrow of time, they will not observe records of the processes that formed them (Hartle, 2004). They will observe as much free energy around them as is required for their existence as observers, and almost certainly no more.
To be clear, this is not the philosophical “brain-in-a-vat” problem: how can I know whether I’m a Boltzmann brain with false memories? This is a more straightforward “theoretical prediction meets observation” scenario: a cosmological theory predicts that a typical observer will be a Boltzmann brain, and will observe that they are a Boltzmann brain. And that prediction is wrong. Whether multiverse models can naturally avoid this problem is an open question; see, among many others, Page (2006); Linde (2007); Banks (2007); de Simone et al. (2010); Aguirre, Carroll & Johnson (2011); Nomura (2011); Boddy & Carroll (2013); Albrecht (2015); Boddy, Carroll & Pollack (2015).
Misgivings about the whole multiverse project are hardly surprising. Are the tests of multiverse theories enough to make it scientific? Unobservable sub-universes are very different to unobservable quarks: we can constrain the properties of quarks via experiment, but every other sub-universe in the multiverse could disappear tomorrow and we would never know. The meagre tests of the multiverse "prove nothing", say Ellis & Silk (2014): "Fundamentally, the multiverse explanation relies on string theory, which is as yet unverified, and on speculative mechanisms for realizing different physics in different sister universes. It is not, in our opinion, robust, let alone testable."
A potentially tricky hurdle is the measure problem, about which there is an extensive literature. Many multiverse theories imply or assume that there are infinitely many other sub-universes. Given a finite population, deriving probabilities is straightforward: what fraction of observers see a value of $\rho_\Lambda$ as small as the one we observe? But in an infinite multiverse, we cannot simply count sub-universes.
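Why counting fails can be stated in one line (a standard fact about probability measures): if each of countably infinitely many sub-universes (or observers) is assigned the same probability $p$, then

$$\sum_{i=1}^{\infty} p = \begin{cases} 0 & \text{if } p = 0,\\ \infty & \text{if } p > 0,\end{cases}$$

and neither option sums to 1.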
In particular, once we have a useful definition of an observer, it seems that we should treat them all on equal footing. Think of this as permutation symmetry — having arbitrarily numbered all the observers (or observer moments), we should be able to shuffle the labels without changing the prediction of the model. But there is no assignment of probabilities to an infinite number of possibilities that respects this symmetry. This is often taken as an incentive to assign unequal probabilities. But it could be argued, and with considerable force, that this means that an infinite multiverse theory cannot justify probabilities and so cannot make predictions. "In an infinite universe," says Olum (2012), "everything which can happen will happen an infinite number of times, so what does it mean to say that one thing is more likely than another?" These are open questions; see, among many others, Vilenkin (1995); Garriga et al. (2006); Aguirre, Gratton, & Johnson (2007); Vilenkin (2007a, b); Gibbons & Turok (2008); Page (2008); Bousso, Freivogel, & Yang (2009); de Simone et al. (2010); Freivogel (2011); Bousso & Susskind (2012); Garriga & Vilenkin (2013); Carroll (2017); Page (2017).
5 After Physics
In physics, fine-tuning problems afflict theories that seem to be successful, that is, theories that can account for the data. The problem is not a falsified prediction, as one might expect from a discounted or discarded theory. Recall the lesson of Ptolemy's model: within the set of possible geocentric planetary systems, an uncomfortably large proportion look very different to our Solar System. A fine-tuning problem arises when only a suspiciously small corner of a large set of alternate possibilities can account for the data. This suggests an interesting thought experiment.
Suppose there is an ultimate theory of physics. At a future International Meeting of Really Important Physicists, Alberta Einstein walks to the chalkboard, scribbles a few equations, and fundamental physics comes to an end. Like chess pieces that have discovered the laws of chess, we would find that no deeper rules exist.
By hypothesis, this theory would be consistent with all scientific data. But we may still glimpse a large set of alternate possibilities, and so a kind of fine-tuning problem remains. Even if it contains no free parameters, Alberta’s chalkboard will show one particular mathematical equation or structure. We will be faced by a very old question: why this universe? Of all the ways the world could have been, why this way? Of all the mathematically consistent chalkboards of equations, why Alberta’s?
Obviously, the answer is not yet another chalkboard of equations. Neither is it more observations of this universe. This is not the kind of question that physics can answer, because we can't prove from any set of equations that they describe reality. Theories don't predict their own success. But if not physics, then what? What do we do when fundamental physics is over?
Perhaps we stop asking questions. Maybe reality doesn’t have any ultimate reason for why it is the way it is. Explanations of the physical world reach the ultimate laws, and stop. This is the supposition of naturalism: the natural world is all there is. For a modern defence, see Carroll (2016).
Alternatively, Tegmark (1998) has defended the "ultimate ensemble theory": that "physical existence is equivalent to mathematical existence". The actual world is not chosen from a set of mathematical possibilities; rather, all mathematical possibilities are equally real, and we are self-aware substructures (SASs) within a particular mathematical structure. A metaphysician might worry about the dissolution of the line between abstract and concrete. The physicist who tries to test Tegmark's idea via its prediction that "the mathematical structure describing our world is the most generic one that is consistent with our observations" faces a problem: we need a probability distribution over the set of mathematical structures, but a probability distribution is itself a mathematical structure. Tegmark says that probabilities are "merely subjective", but our subjective states of mind are mathematical substructures, too.
By contrast, axiarchism (Leslie, 1989) and theism (e.g. Swinburne, 2004; Collins, 2009) argue that beneath the mathematical structure of our universe is a reason: our universe is morally valuable, particularly its embodied, free, conscious agents. Just as Tegmark promotes possibilities to reality on mathematical grounds, axiarchism does so on moral grounds: the world exists because it is good. Theism proposes that God exists necessarily in some sense, and the physical world is the result of God’s free choice to create a morally valuable world.
For each of these alternatives, the fine-tuning of the universe for life plays an important role. For Tegmark, the complexity required by any SAS explains why we see this universe/mathematical structure, rather than a simpler one. For axiarchism and theism, fine-tuning for life shows how these ideas could have explanatory power. Given the seemingly extraordinarily small proportion of possibilities that permit the existence of embodied moral agents, the axiarchist and theist can understand something of why Alberta's chalkboard is the one that has gone to all the bother of existing. Further examination of these alternatives takes us beyond the philosophy of physics.
Acknowledgments
This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the John Templeton Foundation.
References
- Adams (2008) Adams, F. C. [2008]: ‘Stars in other universes: stellar structure with different fundamental constants’, Journal of Cosmology and Astroparticle Physics, 8, 010.
- Adams (2016) Adams, F. C. [2016]: ‘Constraints on alternate universes: stars and habitable planets with different fundamental constants’, Journal of Cosmology and Astroparticle Physics, 2, 042.
- Adams & Grohs (2016) Adams, F. C. and Grohs, E. [2016], ‘On the Habitability of Universes without Stable Deuterium’, arXiv:1612.04741.
- Adams & Grohs (2017) Adams, F. C. and Grohs, E. [2017]: ‘Stellar helium burning in other universes: A solution to the triple alpha fine-tuning problem’, Astroparticle Physics, 87, 40.
- Agrawal et al. (1998) Agrawal, V., Barr, S. M., Donoghue, J. F. and Seckel, D. [1998]: ‘Anthropic Considerations in Multiple-Domain Theories and the Scale of Electroweak Symmetry Breaking’, Physical Review Letters, 80, pp. 1822–1825.
- Aguirre (2007) Aguirre, A. [2007]: ‘The inflationary multiverse’, in Carr, B. J. (ed.), Universe or Multiverse?, Cambridge: Cambridge University Press.
- Aguirre, Gratton, & Johnson (2007) Aguirre A., Gratton S., Johnson M. C. [2007]: ‘Hurdles for recent measures in eternal inflation’, Physical Review D, 75, 123501
- Aguirre, Carroll & Johnson (2011) Aguirre, A., Carroll, S. M. and Johnson, M. C. [2011]: ‘Out of Equilibrium: Understanding Cosmological Evolution to Lower-Entropy States’, arXiv preprint: 1108.0417.
- Albrecht (2015) Albrecht, A. [2015]: ‘Tuning, Ergodicity, Equilibrium, and Cosmology’, Physical Review D, 91, pp. 1–11.
- Banks (2007) Banks, T. A. [2007]: ‘Entropy and Initial Conditions in Cosmology’, arXiv preprint: hep-th/0701146.
- Barnes (2012) Barnes, L. A. [2012]: ‘The Fine-Tuning of the Universe for Intelligent Life’, Publications of the Astronomical Society of Australia, 29, pp. 529–564.
- Barnes (2015) Barnes, L. A. [2015]: ‘Binding the diproton in stars: anthropic limits on the strength of gravity’, Journal of Cosmology and Astroparticle Physics, 12, 050.
- Barnes (2017) Barnes, L. A. [2017]: ‘Testing the multiverse: Bayes, fine-tuning and typicality’, in Chamcham et al. (ed.), The Philosophy of Cosmology, forthcoming with Cambridge University Press.
- Barnes & Lewis (2017) Barnes, L. A. and Lewis, G. F. [2017]: ‘Producing the deuteron in stars: anthropic limits on fundamental constants’, Journal of Cosmology and Astroparticle Physics, 07, 036.
- Barrow & Tipler (1986) Barrow, J. D. and F. J. Tipler [1986]: The Anthropic Cosmological Principle, Oxford: Clarendon Press.
- Boltzmann (1895) Boltzmann, L. [1895]: ‘On Certain Questions of the Theory of Gases’, Nature, 51, pp. 413–415.
- Boddy & Carroll (2013) Boddy, K. K. and Carroll, S. M. [2013]: ‘Can the Higgs Boson Save Us From the Menace of the Boltzmann Brains?’, arXiv preprint: 1308.4686.
- Boddy, Carroll & Pollack (2015) Boddy, K. K., Carroll, S. M. and Pollack, J. [2015]: ‘Why Boltzmann Brains Don’t Fluctuate Into Existence From the De Sitter Vacuum’, arXiv preprint: 1505.02780.
- Bostrom (2002) Bostrom, N. [2002]: Anthropic Bias: Observation Selection Effects in Science and Philosophy, New York: Routledge.
- Bousso, Freivogel, & Yang (2009) Bousso R., Freivogel B. and Yang I.-S. [2009]: ‘Properties of the scale factor measure’, Physical Review D, 79, 063513
- Bousso & Susskind (2012) Bousso, R., & Susskind, L. [2012]: ‘Multiverse interpretation of quantum mechanics’, Physical Review D, 85, 045007
- Burgess & Moore (2006) Burgess C. and G. Moore [2006]: The Standard Model: A Primer, Cambridge: Cambridge University Press.
- Carr & Rees (1979) Carr, B. J. and M. J. Rees [1979]: ‘The Anthropic Principle and the Structure of the Physical World’, Nature, 278, pp. 605–612.
- Carroll (2016) Carroll, S. M. [2016]: The Big Picture: On the Origins of Life, Meaning, and the Universe Itself, New York: Dutton.
- Carroll (2017) Carroll, S. M. [2017]: ‘Why Boltzmann Brains Are Bad’, arXiv preprint: 1702.00850
- Carter (1974) Carter, B. [1974]: ‘Large Number Coincidences and the Anthropic Principle in Cosmology’, in M. S. Longair (ed.), Confrontation of Cosmological Theories with Observational Data, Dordrecht: D. Reidel, pp. 291–298.
- Caticha (2009) Caticha, A. [2009]: ‘Quantifying Rational Belief’, AIP Conference Proceedings, 1193, pp. 60–68.
- Collins (2009) Collins, R. [2009]: ‘The teleological argument: an exploration of the fine-tuning of the universe’, in W. L. Craig and J. P. Moreland (ed.), The Blackwell Companion to Natural Theology, Oxford: Blackwell Publishing.
- Cox (1946) Cox, R. T. [1946]: ‘Probability, frequency and reasonable expectation’, American Journal of Physics, 14, pp. 1–13.
- Damour & Donoghue (2008) Damour, T. and Donoghue, J. F. [2008]: ‘Constraints on the Variability of Quark Masses from Nuclear Binding’, Physical Review D, 78, 014014.
- Davies (1983) Davies, P. C. W. [1983]: ‘The Anthropic Principle’, Progress in Particle and Nuclear Physics, 10, pp. 1–38.
- de Simone et al. (2010) de Simone, A., Guth, A. H., Linde, A., Noorbala, M., Salem, M. P., & Vilenkin, A. [2010]: ‘Boltzmann brains and the scale-factor cutoff measure of the multiverse’, Physical Review D, 82, 063520
- Dine (2015) Dine, M. [2015]: ‘Naturalness Under Stress’, arXiv: 1501.01035.
- Donoghue (2007) Donoghue, J. F. [2007]: ‘The fine-tuning problems of particle physics and anthropic mechanisms’, in Carr, B. J. (ed.), Universe or Multiverse?, Cambridge: Cambridge University Press.
- Ehrenfest (1917) Ehrenfest P. [1917]: ‘Can atoms or planets exist in higher dimensions?’, Proceedings of the Amsterdam Academy, 20, 200.
- Ellis & Silk (2014) Ellis, G.F.R. and Silk, J. [2014]: ‘Scientific method: Defend the integrity of physics’, Nature, 516, pp. 321–323.
- Fowlie (2014) Fowlie, A. [2014]: ‘CMSSM, naturalness and the “fine-tuning price” of the Very Large Hadron Collider’, Physical Review D, 90, 015010.
- Freivogel (2011) Freivogel B. [2011]: ‘Making predictions in the multiverse’, Classical and Quantum Gravity, 28, 204007
- Garriga et al. (2006) Garriga, J., Schwartz-Perlov, D., Vilenkin, A., & Winitzki, S. [2006]: ‘Probabilities in the inflationary multiverse’, Journal of Cosmology and Astroparticle Physics, 1, 017
- Garriga & Vilenkin (2013) Garriga J., Vilenkin A. [2013]: ‘Watchers of the multiverse’, Journal of Cosmology and Astroparticle Physics, 5, 037
- Gibbons & Turok (2008) Gibbons G. W., Turok N. [2008]: ‘Measure problem in cosmology’, Physical Review D, 77, 063516
- Hall & Nomura (2008) Hall, L. and Nomura, Y. [2008]: ‘Evidence for the multiverse in the standard model and beyond’, Physical Review D, 78, 035001.
- Hartle (2004) Hartle, J. B. [2004]: ‘The Physics of Now’, arXiv preprint: gr-qc/0403001.
- Hogan (2000) Hogan, C. J. [2000]: ‘Why the Universe Is Just so’, Reviews of Modern Physics, 72, pp. 1149–1161.
- Jaynes (2003) Jaynes, E. T. [2003]: Probability Theory: the Logic of Science, Cambridge: Cambridge University Press.
- Knuth & Skilling (2012) Knuth, K. H. and J. Skilling [2012]: ‘Foundations of Inference’, Axioms, 1 (1), pp. 38-73.
- Kolmogorov (1933) Kolmogorov, A. [1933]: Foundations of the Theory of Probability, Berlin: Julius Springer.
- Leslie (1989) Leslie J. [1989]: Universes. London: Routledge.
- Lewis & Barnes (2016) Lewis, G. F. and L. A. Barnes [2016]: A Fortunate Universe: Life in a Finely Tuned Cosmos, Cambridge: Cambridge University Press.
- Linde (2007) Linde, A. [2007]: ‘Sinks in the Landscape, Boltzmann Brains and the Cosmological Constant Problem’, Journal of Cosmology and Astroparticle Physics, 01, 22.
- Linde (2015) Linde, A. [2015]: ‘A Brief History of the Multiverse’, arXiv: 1512.01203.
- Neal (2006) Neal R. M. [2006]: ‘Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning’, arXiv: math/0608592
- Nomura (2011) Nomura, Y. [2011]: ‘Physical Theories, Eternal Inflation, and the Quantum Universe’, Journal of High Energy Physics, 11, 63.
- Olum (2012) Olum, K. D. [2012]: ‘Is There Any Coherent Measure for Eternal Inflation?’, Physical Review D, 86, pp. 1–6.
- Page (2006) Page, D. N. [2006]: ‘Susskind’s Challenge to the Hartle-Hawking No-Boundary Proposal and Possible Resolutions’, arXiv preprint: hep-th/0610199.
- Page (2008) Page D. N. [2008]: ‘Cosmological measures without volume weighting’, Journal of Cosmology and Astroparticle Physics, 10, 025
- Page (2017) Page D. N. [2017]: ‘Bayes Keeps Boltzmann Brains at Bay’, arXiv preprint: 1708.00449
- Peacock (1998) Peacock, J. A. [1998]: Cosmological Physics, Cambridge: Cambridge University Press.
- Pochet et al. (1991) Pochet, T., J. M. Pearson, G. Beaudet, and H. Reeves [1991]: ‘The Binding of Light Nuclei, and the Anthropic Principle’, Astronomy and Astrophysics, 243, pp. 1–4.
- Raby (2010) Raby, S. [2010]: ‘Grand Unified Theories’, in Nakamura, K. et al., Review of Particle Physics, Journal of Physics G, 37, 075021.
- Ramsey (1926) Ramsey, F. P. [1926]: ‘Truth and Probability’, in R. B. Braithwaite (ed.), The Foundations of Mathematics and other Logical Essays, London: Kegan, Paul, Trench, Trubner & Co., pp. 156–198.
- Schellekens (2013) Schellekens, A. N. [2013]: ‘Life at the Interface of Particle Physics and String Theory’, Reviews of Modern Physics, 85, pp. 1491–1540.
- Silk (1977) Silk, J. [1977]: ‘Cosmogony and the Magnitude of the Dimensionless Gravitational Coupling Constant’, Nature, 265, pp. 710–711.
- Skilling (2014) Skilling, J. [2014]: ‘Foundations and Algorithms’, in Hobson, M. et al (eds.), Bayesian Methods in Cosmology, Cambridge: Cambridge University Press.
- Swinburne (2004) Swinburne, R. [2004]: The Existence of God, Oxford: Clarendon Press.
- Tegmark (1997) Tegmark M. [1997]: ‘On the dimensionality of spacetime’, Classical and Quantum Gravity, 14, L69.
- Tegmark (1998) Tegmark M. [1998]: ‘Is “the Theory of Everything” Merely the Ultimate Ensemble Theory?’, Annals of Physics, 270, 1, pp. 1–51.
- Vilenkin (1995) Vilenkin, A. [1995]: ‘Making predictions in an eternally inflating universe’, Physical Review D, 52, 3365
- Vilenkin (2007a) Vilenkin, A. [2007a]: ‘Freak observers and the measure of the multiverse’, Journal of High Energy Physics, 1, 092.
- Vilenkin (2007b) Vilenkin, A. [2007b]: ‘A measure of the multiverse’, Journal of Physics A, 40, 6777.
- Wall & Jenkins (2003) Wall, J. V. and Jenkins, C. R. [2003]: Practical Statistics for Astronomers, Cambridge: Cambridge University Press.
- Weinberg (1987) Weinberg, S. [1987]: ‘Anthropic Bound on the Cosmological Constant’, Physical Review Letters, 59, pp. 2607–10.
- Whitrow (1955) Whitrow G. J. [1955]: ‘Why physical space has three dimensions’, The British Journal for the Philosophy of Science, VI, 13.