In Chapter 2, we discussed the cosmological principle which asserts that the Universe is spatially homogeneous and isotropic. We then wrote down the metric (23) for an expanding, homogeneous and isotropic Universe rather like our own.
However, it turns out that equation (23) is just one of several possibilities for metrics satisfying the cosmological principle. The effects of our overly restrictive choice trickle through to Equation (109), which directly links the Universe’s density $\rho$ to its rate of expansion $\dot a$. All experience of physics would lead us to expect only the acceleration $\ddot a$ to be determined by $\rho$. As we will now see, this weirdness reflects the fact that there is actually a term missing from equation (109).
We start by taking equation (24) and re-expressing it in polar coordinates. This doesn’t change anything physical, and leads to the expression:

$$ds^2 = -dt^2 + a^2(t)\left[\,dr^2 + r^2\,d\Omega^2\,\right], \qquad (116)$$

where $d\Omega^2 = d\theta^2 + \sin^2\theta\,d\phi^2$ is the metric on the 2-sphere.
Now we want to change this metric to be more general. The ‘cosmological principle’ tells us that it must remain isotropic, i.e. spherically symmetric, but it doesn’t immediately tell us anything about what is going on in the radial direction. So we should allow the radial part of the metric to scale as a function of $r$, giving us the generalised form

$$ds^2 = -dt^2 + a^2(t)\left[\,e^{2\beta(r)}\,dr^2 + r^2\,d\Omega^2\,\right], \qquad (117)$$

where $e^{2\beta(r)}$ is a convenient way of representing an unknown positive function to be determined.⁸
⁸ The point is just that using $e^{2\beta(r)}$ leads to neater algebra than writing a free function in a more obvious way, e.g. $f(r)$.
How do we pin down the properties of $\beta(r)$? First, we isolate only the spatial part of the metric so that we can calculate its properties without worrying about time. We call the metric of the spatial part $\gamma_{ij}$, and it is then just the diagonal matrix

$$\gamma_{ij} = \mathrm{diag}\!\left(e^{2\beta(r)},\; r^2,\; r^2\sin^2\theta\right). \qquad (118)$$

Next, recall that the Ricci tensor is an intrinsic property of a space. (So too is the Riemann tensor, but it is more convenient to work with the Ricci tensor, which has only two indices.) It must have no preferred directions, i.e. in local cartesian coordinates it must be proportional to the identity matrix. So, if we calculate the Ricci tensor of the 3-space represented by the metric $\gamma_{ij}$, we will find that in such coordinates it is proportional to $\delta_{ij}$. (The superscript in $R^{(3)}_{ij}$ just reminds us we are calculating for the 3-space, not the full 4-dimensional spacetime.) The coordinate-independent generalisation of this statement is that $R^{(3)}_{ij} \propto \gamma_{ij}$; the constant of proportionality is chosen as $2k$ (with the $2$ being down to convention):

$$R^{(3)}_{ij} = 2k\,\gamma_{ij}. \qquad (119)$$
To continue the calculation we need explicit expressions for the Ricci tensor, given by applying equation (105) to $\gamma_{ij}$. The non-vanishing components turn out to be:

$$R^{(3)}_{11} = \frac{2}{r}\,\partial_r\beta, \qquad (120a)$$
$$R^{(3)}_{22} = e^{-2\beta}\left(r\,\partial_r\beta - 1\right) + 1, \qquad (120b)$$
$$R^{(3)}_{33} = \left[e^{-2\beta}\left(r\,\partial_r\beta - 1\right) + 1\right]\sin^2\theta. \qquad (120c)$$

Plugging these into the maximal symmetry requirement specified by (119), we find:

$$\beta(r) = -\tfrac{1}{2}\ln\!\left(1 - kr^2\right). \qquad (121)$$
Putting together the results above, we have found that the most general metric on a spatial slice is:

$$d\sigma^2 = \frac{dr^2}{1-kr^2} + r^2\,d\Omega^2. \qquad (122)$$

Here, $d\sigma^2$ is used to indicate that we are still just talking about the spatial distances – no time, or scaling with time, is yet included.
The value of $k$ sets the curvature of the spatial surfaces. The geometry is classified as:

$$k < 0:\ \text{constant negative curvature on the spatial slices (“open”)},$$
$$k = 0:\ \text{no curvature on the spatial slices (“flat”)}, \qquad (123)$$
$$k > 0:\ \text{constant positive curvature on the spatial slices (“closed”)}.$$
These are, respectively, three-dimensional analogues of two-dimensional saddles, flat planes and spheres. In a saddle-like geometry, parallel lines diverge; on flat planes they stay a constant distance apart; on spheres they converge.
The terminology “open” sounds like there is an infinite amount of space. That’s a bit misleading – the case with $k<0$ possibly describes an infinite open space but could also describe a non-simply-connected compact space, so calling it “open” is technically an over-interpretation. The same “possibly infinite” property is true of the flat universe, $k=0$; that case is just the spherical polar coordinates version of the original flat FRW metric, equation (23).
Before moving on, we should note that it is sometimes more computationally convenient to redefine the radial coordinate:

$$d\chi = \frac{dr}{\sqrt{1-kr^2}}. \qquad (124)$$

Integrating,

$$r = S_k(\chi), \qquad (125)$$

where

$$S_k(\chi) = \begin{cases} k^{-1/2}\sin\!\left(\sqrt{k}\,\chi\right) & k > 0 \\ \chi & k = 0 \\ |k|^{-1/2}\sinh\!\left(\sqrt{|k|}\,\chi\right) & k < 0, \end{cases} \qquad (126)$$

such that

$$d\sigma^2 = d\chi^2 + S_k(\chi)^2\,d\Omega^2. \qquad (127)$$
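For concreteness, here is a small helper of my own (not from the notes) implementing $S_k(\chi)$ from eq. (126); it maps the alternative radial coordinate $\chi$ back to $r$ for either sign of the curvature $k$:

```python
import math

def S_k(chi, k):
    """r = S_k(chi): sin-like for k > 0, linear for k = 0, sinh-like for k < 0."""
    if k > 0:
        return math.sin(math.sqrt(k) * chi) / math.sqrt(k)
    if k < 0:
        return math.sinh(math.sqrt(-k) * chi) / math.sqrt(-k)
    return chi

# All three cases agree for small chi, as they must (any curved space looks flat locally):
print([round(S_k(0.1, k), 6) for k in (+1.0, 0.0, -1.0)])   # ~[0.099833, 0.1, 0.100167]
```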
Recall that the spacetime metric describes one of our maximally symmetric spaces evolving in size; putting together equation (117) with expression (122) we have:

$$ds^2 = -dt^2 + R^2(t)\left[\frac{dr^2}{1-kr^2} + r^2\,d\Omega^2\right], \qquad (128)$$
where we have used $R$ as the scalefactor instead of $a$ for reasons which will become clear momentarily. There is a scaling freedom in the choice of coordinate $r$, which is to say that under the transformation $r \to \lambda r$ for constant $\lambda$, the form of the metric is preserved but $R$ and $k$ get rescaled. You will sometimes see this freedom used to set $k$ to one of $-1$, $0$ or $+1$ for the open, flat and closed cases – but in our case we will instead use it to normalise the scalefactor today, such that it equals one at the present time. (You can’t do both simultaneously, in general.)
When using this particular normalisation, the scalefactor is normally referred to as $a$ rather than $R$. You can think of it as being $a(t) = R(t)/R_0$, i.e. a physical scalefactor divided by its physical scale today, $R_0$. In this choice of normalization, the comoving coordinate $r$ has, very sensibly, dimensions of a length, and consequently $a$ is dimensionless. Similarly the curvature $k$ has dimensions of an inverse length squared (and is sometimes given its own symbol in the literature to emphasise this). At the risk of being a bit repetitive, here is the metric with those conventions baked in:
$$ds^2 = -dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} + r^2\,d\Omega^2\right]. \qquad (129)$$
It doesn’t really look any different, but by convention using $a$ for the scalefactor is taken to guarantee that $a=1$ at the present day. Setting $c=1$, the non-zero Christoffel symbols are:
$$\Gamma^0_{11} = \frac{a\dot a}{1-kr^2}, \quad \Gamma^0_{22} = a\dot a\,r^2, \quad \Gamma^0_{33} = a\dot a\,r^2\sin^2\theta,$$
$$\Gamma^1_{01} = \Gamma^2_{02} = \Gamma^3_{03} = \frac{\dot a}{a}, \quad \Gamma^1_{11} = \frac{kr}{1-kr^2},$$
$$\Gamma^1_{22} = -r\left(1-kr^2\right), \quad \Gamma^1_{33} = -r\left(1-kr^2\right)\sin^2\theta,$$
$$\Gamma^2_{12} = \Gamma^3_{13} = \frac{1}{r}, \quad \Gamma^2_{33} = -\sin\theta\cos\theta, \quad \Gamma^3_{23} = \cot\theta, \qquad (130)$$
or related to these by symmetry. Non-zero components of the Ricci tensor are:
$$R_{00} = -3\,\frac{\ddot a}{a}, \qquad (131a)$$
$$R_{11} = \frac{a\ddot a + 2\dot a^2 + 2k}{1-kr^2}, \qquad (131b)$$
$$R_{22} = r^2\left(a\ddot a + 2\dot a^2 + 2k\right), \qquad (131c)$$
$$R_{33} = r^2\left(a\ddot a + 2\dot a^2 + 2k\right)\sin^2\theta, \qquad (131d)$$
and the Ricci scalar is
$$R = 6\left[\frac{\ddot a}{a} + \left(\frac{\dot a}{a}\right)^2 + \frac{k}{a^2}\right]. \qquad (132)$$
With our newly generalised FRW metric, the first Friedmann equation (109) becomes:
$$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2}, \qquad (133)$$
and the second Friedmann equation does not change, so I’ll repeat it here for completeness:
$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right). \qquad (134)$$
☞ Exercise 5D
Check that these two equations follow from the Ricci tensor components and scalar given above.
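If you would like to check eqs (130)–(132) – and hence attempt Exercise 5D – without grinding through the algebra by hand, the following is a minimal sympy sketch of my own (not part of the notes). It recomputes the Christoffel symbols, Ricci tensor and Ricci scalar directly from the metric (129):

```python
import sympy as sp

t, r, th, ph, k = sp.symbols('t r theta phi k')
a = sp.Function('a')(t)
x = [t, r, th, ph]

# FRW metric (129) with c = 1 and signature (-,+,+,+)
g = sp.diag(-1,
            a**2 / (1 - k*r**2),
            a**2 * r**2,
            a**2 * r**2 * sp.sin(th)**2)
ginv = g.inv()

def christoffel(mu, nu, lam):
    """Gamma^mu_{nu lam} = (1/2) g^{mu s} (d_nu g_{s lam} + d_lam g_{s nu} - d_s g_{nu lam})."""
    return sp.simplify(sum(ginv[mu, s] * (sp.diff(g[s, lam], x[nu])
                                          + sp.diff(g[s, nu], x[lam])
                                          - sp.diff(g[nu, lam], x[s]))
                           for s in range(4)) / 2)

Gamma = [[[christoffel(m, n, l) for l in range(4)] for n in range(4)] for m in range(4)]

def ricci(mu, nu):
    """R_{mu nu} = d_l Gamma^l_{mu nu} - d_nu Gamma^l_{mu l} + Gamma^l_{ls} Gamma^s_{mu nu} - Gamma^l_{nu s} Gamma^s_{mu l}."""
    return sp.simplify(sum(sp.diff(Gamma[l][mu][nu], x[l]) - sp.diff(Gamma[l][mu][l], x[nu])
                           + sum(Gamma[l][l][s]*Gamma[s][mu][nu] - Gamma[l][nu][s]*Gamma[s][mu][l]
                                 for s in range(4))
                           for l in range(4)))

print(ricci(0, 0))                                                       # expect -3 a''/a, eq. (131a)
R_scalar = sp.simplify(sum(ginv[m, m] * ricci(m, m) for m in range(4)))  # metric is diagonal
print(R_scalar)                                                          # compare with eq. (132)
```

From here, forming the Einstein tensor and inserting a perfect-fluid energy-momentum tensor reproduces the Friedmann equations (133)–(134).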
Earlier I mentioned the weirdness of having an equation like (109) linking density directly to speed of expansion. With the updated version, equation (133), that weirdness is completely gone – the rate of expansion depends both on the density $\rho$ and on the curvature $k$. In a Newtonian picture we’d think of this as a “constant of integration” arising when integrating an equation for acceleration. But from a relativistic perspective, we can find it by purely geometrical means.
In an expanding universe (i.e. $\dot a > 0$ at all times) filled with ordinary matter (i.e. satisfying the strong energy condition: $\rho + 3p > 0$), Eq. (134) implies $\ddot a < 0$ at all times. This indicates the existence of a singularity in the finite past: the ‘big bang’.
That said, the conclusion relies on the assumption that general relativity (and so the Friedmann equations) are applicable up to arbitrarily high energies. This assumption is almost certainly not true and it is expected that a quantum theory of gravity will remove the initial big bang singularity. Even today, theories like inflation – that we will discuss soon – dramatically alter the classical big bang picture, by violating the strong energy condition.
The expansion rate of the FRW universe is characterized by the Hubble parameter,
$$H \equiv \frac{\dot a}{a}. \qquad (135)$$
The expansion rate at the present epoch, $t_0$, is called the Hubble constant, $H_0$: the “$0$” subscript is used to denote the present epoch: $H_0 = H(t_0)$, $a_0 = a(t_0)$. Often you will see the dimensionless number $h$ – not to be confused with Planck’s constant – where
$$H_0 = 100\,h\ \mathrm{km\,s^{-1}\,Mpc^{-1}}. \qquad (136)$$
The astronomical length scale of a megaparsec (Mpc) is equal to $3.086\times10^{24}\,$cm. Observationally, $h \approx 0.7$. On purely dimensional grounds we can expect that typical cosmological scales will be set by the Hubble length,
$$d_H = c\,H_0^{-1} \approx 3000\,h^{-1}\ \mathrm{Mpc} \approx 9.3\times10^{27}\,h^{-1}\ \mathrm{cm}. \qquad (137)$$
Similarly the Hubble time is
$$t_H = H_0^{-1} \approx 9.8\,h^{-1}\ \mathrm{Gyr}. \qquad (138)$$
Since we usually set $c=1$, $H_0^{-1}$ is referred to as both the Hubble length and the Hubble time.
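As a quick numerical illustration of eqs (137)–(138) – my own sketch, assuming $h = 0.7$ – the Hubble length and time come out as:

```python
h = 0.7
c_km_s = 2.998e5            # speed of light in km/s
Mpc_cm = 3.086e24           # 1 Mpc in cm
H0 = 100.0 * h              # Hubble constant in km/s/Mpc, eq. (136)

d_H_Mpc = c_km_s / H0       # Hubble length c/H0 in Mpc
d_H_cm = d_H_Mpc * Mpc_cm

Mpc_km = Mpc_cm * 1e-5      # 1 Mpc in km
t_H_s = Mpc_km / H0         # Hubble time 1/H0 in seconds
t_H_Gyr = t_H_s / (3.156e7 * 1e9)

print(f"d_H = {d_H_Mpc:.0f} Mpc = {d_H_cm:.2e} cm")   # ~4280 Mpc for h = 0.7
print(f"t_H = {t_H_Gyr:.1f} Gyr")                     # ~14 Gyr for h = 0.7
```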
The density parameter, which counts the energy density from all constituents of the universe, is defined as
$$\Omega \equiv \frac{\rho}{\rho_{\mathrm{crit}}}, \qquad (139)$$
where the critical density
$$\rho_{\mathrm{crit}} \equiv \frac{3H^2}{8\pi G} \qquad (140)$$
changes with time. The origin of the term ‘critical’ lies in rewriting the Friedmann equation (133):
$$\Omega - 1 = \frac{k}{a^2H^2}. \qquad (141)$$
The type of curvature is therefore determined by the value of $\Omega$:

$$\Omega < 1 \;\Leftrightarrow\; k < 0 \quad\text{(open)},$$
$$\Omega = 1 \;\Leftrightarrow\; k = 0 \quad\text{(flat)},$$
$$\Omega > 1 \;\Leftrightarrow\; k > 0 \quad\text{(closed)}.$$
So, the density parameter tells us which of the three FRW geometries describes our universe. At present, our universe appears to be flat.
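To get a feel for the numbers, here is a rough evaluation of the critical density (140) – my own sketch, again assuming $h = 0.7$:

```python
import math

h = 0.7
H0 = 100.0 * h * 1000.0 / 3.086e22         # H0 converted from km/s/Mpc to s^-1
G = 6.674e-11                              # Newton's constant, m^3 kg^-1 s^-2

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # eq. (140), in kg/m^3
print(f"rho_crit = {rho_crit:.2e} kg/m^3") # ~9.2e-27 kg/m^3: a few protons per cubic metre

M_sun = 1.989e30                           # kg
Mpc_m = 3.086e22                           # m
print(f"         = {rho_crit * Mpc_m**3 / M_sun:.2e} M_sun/Mpc^3")   # ~1.4e11
```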
We can further streamline our expressions by treating the contribution of the spatial curvature as a fictitious energy density⁹,
⁹ The ability to pretend that curvature is some weird form of energy comes from a separability of the Einstein tensor that is not obvious; that is, we can split the Einstein tensor into a flat ($k=0$) piece plus a curvature piece, and the two parts of the tensor individually satisfy the normal conservation equation, $\nabla^\mu G_{\mu\nu} = 0$.
$$\rho_k \equiv -\frac{3k}{8\pi G\,a^2}, \qquad (142)$$
with a corresponding density parameter,
$$\Omega_k \equiv \frac{\rho_k}{\rho_{\mathrm{crit}}} = -\frac{k}{a^2H^2}, \qquad (143)$$
leading to the ridiculously compact version of the Friedmann equation
$$\Omega + \Omega_k = 1. \qquad (144)$$
Occasionally we will refer to the deceleration parameter, which is defined by convention as
$$q \equiv -\frac{\ddot a\,a}{\dot a^2}. \qquad (145)$$
The definition of $q$ looks slightly arbitrary but there is method in the madness:
It is proportional to the deceleration of the expansion measured by $-\ddot a$;
Because it is normalised by $\dot a^2$ on the denominator, $q$ has no time dimensions;
Because of the extra $a$ on the numerator, $q$ has no length dimensions even if $a$ does (recall the dimensions of $a$ depend on the choice of normalisation; in our convention $a$ is dimensionless, but it didn’t have to be so).
Therefore the value of $q$ does not depend either on conventions or units.
An immediate consequence of the two Friedmann equations is the continuity equation, which we previously derived in Eq. (96) by considering conservation of energy-momentum. Inserting the Christoffel symbols for the general FRW metric shows that the derivation is unchanged by the generalisation that we have made in Section 1, so we still have (using our newly-defined Hubble parameter $H$):
$$\dot\rho + 3H\left(\rho + p\right) = 0. \qquad (146)$$
Recall this encodes the first law of thermodynamics. The continuity equation (96) can be integrated for a general fluid with constant equation of state parameter $w$ (see equation 90) to give
$$\rho \propto a^{-3(1+w)}. \qquad (147)$$
Supposing $k=0$ and the Universe contains just one fluid, the Friedmann equation (133) combined with equation (147) leads to the time evolution of the scale factor,
$$a \propto t^{2/[3(1+w)]} \quad (\text{for } w \neq -1). \qquad (148)$$
Table 1:
| Fluid | Equation of state | Density evolution | Expansion (if sole content, at critical density) |
| --- | --- | --- | --- |
| non-relativistic matter | $w = 0$ | $\rho \propto a^{-3}$ | $a \propto t^{2/3}$ |
| radiation/relativistic matter | $w = 1/3$ | $\rho \propto a^{-4}$ | $a \propto t^{1/2}$ |
| cosmological constant | $w = -1$ | $\rho = $ constant | $a \propto e^{Ht}$ |
| curvature (density defined by eq. 142) | $w = -1/3$ | $\rho_k \propto a^{-2}$ | $a \propto t$ |
There are four commonly-encountered equation-of-state parameters for the various contents of the universe. The corresponding behaviours are summarised in Table 1. The first column describes the relation between density and pressure for a single type of fluid; the second column describes the resulting evolution of the fluid density with scalefactor $a$. These two properties are intrinsic to a particular fluid because they follow directly from the continuity equation. On the other hand the final column has a different standing – it assumes that the only content of the universe is the type of matter described (at the critical density, $\Omega = 1$).
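As a cross-check of Table 1, here is a tiny script of my own that regenerates the density and expansion scalings from eqs (147)–(148) for each equation of state:

```python
from fractions import Fraction

# Equation-of-state parameters for the four fluids in Table 1
fluids = {"non-relativistic matter": Fraction(0),
          "radiation": Fraction(1, 3),
          "curvature": Fraction(-1, 3),
          "cosmological constant": Fraction(-1)}

for name, w in fluids.items():
    density_exponent = -3 * (1 + w)                  # rho ∝ a^(this), eq. (147)
    if w == -1:
        growth = "exp(H t)  (rho constant)"          # eq. (148) does not apply for w = -1
    else:
        growth = f"t^({Fraction(2, 3) / (1 + w)})"   # eq. (148)
    print(f"{name:25s} w = {str(w):4s}  rho ∝ a^({density_exponent}),  a ∝ {growth}")
```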
Obviously, the real universe actually contains a mixture of different things. So far as we can tell it is flat and contains radiation, matter, and a cosmological constant with $w=-1$ (discussed more below). So the actual expansion history of our universe doesn’t follow any of the scalings listed in the final column of the above table. However, it approximately follows one of them at any given epoch. In the early universe ($a \ll 1$), the density of radiation must have overwhelmed the density of matter or cosmological constant ($\rho_r \propto a^{-4}$, whereas $\rho_m \propto a^{-3}$ and $\rho_\Lambda$ is constant). We call this the radiation-domination epoch, and to an excellent approximation the scalefactor grows as $a \propto t^{1/2}$. Later, the matter density becomes larger as the radiation redshifts away, and the universe enters a matter-dominated epoch ($a \propto t^{2/3}$). Finally, right now the matter density has dropped far enough that the cosmological constant appears to be becoming dominant; so we are just entering an epoch in which the size of the universe will grow exponentially.
Let’s take a quantitative look at this history. It is hopefully clear from the discussion above that it depends on the precise densities of each component today. For each species $i$ we define the present ratio of the energy density relative to the critical density,
$$\Omega_i \equiv \frac{\rho_i(t_0)}{\rho_{\mathrm{crit}}(t_0)}, \qquad (149)$$
and the corresponding equations of state
$$p_i = w_i\,\rho_i. \qquad (150)$$
This allows one to rewrite the first Friedmann equation (133) as
$$\frac{H^2}{H_0^2} = \Omega_r\,a^{-4} + \Omega_m\,a^{-3} + \Omega_k\,a^{-2} + \Omega_\Lambda, \qquad (151)$$
which implies the following consistency relation
$$\Omega_r + \Omega_m + \Omega_k + \Omega_\Lambda = 1, \qquad (152)$$
which is a generalised version of Eq. (144). Incidentally, in these terms, it can be helpful to rewrite the second Friedmann equation (134) evaluated at $t_0$ as
$$q_0 = \Omega_r + \tfrac{1}{2}\,\Omega_m - \Omega_\Lambda. \qquad (153)$$
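To make eqs (151)–(153) concrete, here is a minimal sketch of my own; the density parameters used are assumptions, roughly the values quoted later in this section:

```python
import math

Om_r, Om_m, Om_L = 9e-5, 0.31, 0.69           # assumed density parameters
Om_k = 1.0 - Om_r - Om_m - Om_L               # consistency relation, eq. (152)

def E(a):
    """Dimensionless expansion rate H(a)/H0 from eq. (151)."""
    return math.sqrt(Om_r*a**-4 + Om_m*a**-3 + Om_k*a**-2 + Om_L)

q0 = Om_r + 0.5*Om_m - Om_L                   # eq. (153)
print(f"H(a=0.5)/H0 = {E(0.5):.2f}, q0 = {q0:.2f}")   # q0 < 0: the expansion is accelerating today
```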
Measuring actual densities in the Universe is challenging. One of the few densities that we can measure almost directly is the radiation density. Most radiation is a remnant of the Big Bang; it has not interacted with matter for almost the entire history of the universe. The photons in this radiation are known as the cosmic microwave background, and can be directly measured to have a thermal spectrum at temperature $T_0 \simeq 2.725\,$K.
This observation is critical to estimating the radiation content of our Universe. On the other hand, we can’t just assume that all the ‘radiation’ comes in the form of photons. In terms of the table above, ‘radiation’ actually means anything moving relativistically, with $w = 1/3$; or to put it another way, particles that are sufficiently light that their energies far exceed their rest-mass energy, $E \gg mc^2$.
Other than light itself, neutrinos are the only such particles that we know of.¹⁰ The trouble is that they are exceptionally hard to observe.¹¹ We can’t measure their remnant spectrum like we can with the photons of the CMB, so we have to rely on some physics to work out their expected density.
¹⁰ There could be other particles beyond the standard model of particle physics. So, any observational consequences arising from assuming “radiation = neutrinos + photons” can be viewed as tests for extra relativistic particle species.
¹¹ https://en.wikipedia.org/wiki/Neutrino_detector
The first expectation is that the temperature (and hence energy density) of the neutrinos should be similar to that of the photons. That’s because, in the early universe, when densities were sufficiently high that neutrinos actually interacted regularly with other matter, they would equilibrate and reach the same temperature. As the universe expands, even if they don’t any longer “talk” to each other, the neutrinos and photons cool at the same rate. The standard model of particle physics has three flavours of neutrino, so one expects there will overall be $4$ times as much radiation density as the CMB alone suggests.¹²
¹² Photons have two polarization states and neutrinos come paired with anti-neutrinos, which means the degeneracy factors should be the same. Remember neutrinos are chiral in the standard model, so there are no additional ‘spin’ degrees of freedom to count.
Actually, that’s an oversimplification because the relationship between temperature and energy density is different for neutrinos – they are fermions, so follow Fermi–Dirac statistics rather than Bose–Einstein statistics. You might be used to this making little difference for classical systems, but in the relativistic limits in which we are operating it makes a slight difference. If $u_\gamma = a_{\mathrm{rad}}\,T^4$ (where $a_{\mathrm{rad}}$ is the radiation constant) for photons/bosons, it turns out that $u_\nu = \tfrac{7}{8}\,a_{\mathrm{rad}}\,T^4$ per species for neutrinos/fermions. So, perhaps $1 + 3\times\tfrac{7}{8} \approx 3.6$ times?
Sadly even our revised expectation isn’t quite right. That’s because the CMB gets an extra energy injection from annihilating electrons and positrons when the universe’s temperature drops below about $6\times10^{9}\,$K, corresponding to the electron rest mass $m_e c^2 \simeq 511\,$keV. At this point, the neutrinos have already decoupled (at around $1\,$MeV).¹³ Consequently, the neutrino background is not quite as dense as the first guess above. It sounds like calculating the resulting correction must be hugely complicated but there’s actually a neat shortcut. Because the expansion of the universe occurs close to thermodynamic equilibrium, entropy must be conserved. We can use this conservation of entropy to relate the total energy before and after the positron-electron annihilation.
¹³ The neutrino decoupling temperature is set by comparing the rate of Fermi interactions with the expansion rate of the universe, and so it’s just a coincidence that the two temperatures are reasonably close. If it’s not already clear, we should note that muons and taus are vastly heavier than electrons, so they annihilate much earlier in the history of the universe and contribute to both the neutrino and photon backgrounds. Only electrons/positrons annihilate late, and so generate a discrepancy between neutrino and photon temperatures.
The reduction in neutrino energy density according to this argument turns out to be a factor $(4/11)^{4/3}$. (The weird-looking number $4/11$ turns up from the different statistics again. The exponent $4/3$ comes from the fact that entropy scales as $T^3$, whereas energy density scales as $T^4$.) So overall,

$$\rho_r = \rho_\gamma\left[1 + 3\times\frac{7}{8}\left(\frac{4}{11}\right)^{4/3}\right] \simeq 1.68\,\rho_\gamma, \qquad (154)$$

where $\rho_\gamma = a_{\mathrm{rad}}\,T_0^4$ is the density of the CMB photons alone.
If all this seems like a lot of detail, don’t worry. I wanted to sketch where the numbers come from in case you need them one day – all you really need to remember for this course is that the cosmic microwave background is not the only relativistic source in the universe today, and accordingly the actual radiation density that we use in calculations is a bit larger than what you calculate from sticking the CMB temperature into $a_{\mathrm{rad}}\,T^4$.
The final radiation density evaluated at the present day, $\Omega_r \sim 10^{-4}$, is almost completely negligible. However, as we discussed above it will have been dominant at sufficiently early times, so having this density estimate is essential.
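For reference, here is how eq. (154) translates into a present-day radiation density parameter – a sketch of my own, assuming $T_0 = 2.725\,$K and $h = 0.7$:

```python
import math

T0 = 2.725                                   # CMB temperature, K
a_rad = 7.566e-16                            # radiation constant, J m^-3 K^-4
c = 2.998e8                                  # m/s

rho_gamma = a_rad * T0**4 / c**2             # photon mass density, kg/m^3
rho_r = rho_gamma * (1 + 3 * (7/8) * (4/11)**(4/3))   # add three neutrino species, eq. (154)

h = 0.7
H0 = 100.0 * h * 1000.0 / 3.086e22           # s^-1
G = 6.674e-11
rho_crit = 3 * H0**2 / (8 * math.pi * G)     # eq. (140)

print(f"Omega_gamma = {rho_gamma/rho_crit:.2e}")   # ~5.0e-5 for h = 0.7
print(f"Omega_r     = {rho_r/rho_crit:.2e}")       # ~8.5e-5 for h = 0.7
```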
The radiation density is one of the cleanest measurements in cosmology. But we can go much further and deduce what else is in our Universe by combining a variety of other observations. One famous observational result is that data imply the existence of a cosmological constant, $\Lambda$, which just means a component of the density with $w=-1$ in equation (150). We’ll examine the origin and meaning of this idea in Section 5, but first it is worth looking at what evidence we use to infer densities of matter and $\Lambda$.
One approach is just to make a census of the Universe around us. If you know the rough number of galaxies per volume, and have an estimated mass for a typical galaxy, you can of course estimate the total matter density. In practice this is never going to be precise because the estimates are hard to perform – and, more fundamentally, because there is a lot of material beyond the confines of galaxies.
Instead much of our present knowledge of densities in cosmology comes from measuring either the luminosity distance to sources of known brightness, or the angular diameter distance to features of known extent. We will define these distances carefully in the next chapter, but they turn out to be functions of the Hubble and density parameters, as well as the redshift $z$ of the object. Consequently any single measurement constrains only a combination of these parameters – just like solving a simultaneous equation, one needs multiple data at different $z$ to decouple and get values for each independent parameter.
With the luminosity distance, most famously measured by supernova surveys, we recover the local expansion rate $H_0$ and some combination of the density parameters $\Omega_m$, $\Omega_\Lambda$ and $\Omega_k$ (although at least we know that these three numbers add up to one, Eq. (152), so technically one only needs to estimate two numbers). The 1998 observation that $\Omega_\Lambda > 0$ won the Nobel Prize for physics, but it did rely on existing observations (for example, based on the ‘count galaxies’ approach) that showed $\Omega_m$ is substantially greater than zero – otherwise the data would not have clearly ruled out the possibility that $\Omega_\Lambda = 0$ with a large negative curvature, $\Omega_k > 0$.
Even today, supernovae don’t on their own give completely clean measurements of the individual parameters. On top of that these measurements only work if we know the intrinsic brightness of a given supernova. There are in fact significant uncertainties in that intrinsic brightness, so it is possible there are systematic errors in supernova-based measurements (though most people believe them to be under control and not a major cause for concern).
Measuring gravitational waves effectively tells us about the luminosity distance too, since the waves from any given source spread out and redshift just like light does. We believe the intrinsic “brightness” (i.e. strength of the gravitational wave emission) can be accurately calculated. Therefore LIGO and its successors offer an up-and-coming route to independent cosmological measurements using the luminosity distance approach.¹⁴
¹⁴ https://arxiv.org/abs/1710.05835
What about the angular diameter distance? There aren’t really single objects of precisely known size in the Universe. However, we are rescued by the existence of “baryon acoustic oscillations” (BAO) – not single objects but frozen waves in the large scale structure, which we’ll study in depth later in the course. If you can measure these waves’ size on the sky, you are measuring a ratio of the intrinsic scale of the BAO to the angular diameter distance.
That ratio turns out to be an extremely complicated function of the different parameters. Still, by measuring the BAO in the cosmic microwave background radiation and then again in much more nearby large scale structure, one can partially decouple the different dependencies – like seeing the same ruler at two different distances, even if you don’t know exactly how long the ruler is. Combining BAO with CMB gives us strong evidence for $\Omega_\Lambda > 0$ (even ignoring all the data discussed above on supernovae).
That’s by no means the end of the information we can get from the CMB or from galaxy surveys. If we characterise the anisotropies more carefully rather than just summarise them with a single scale, the constraints on parameters become stronger. We’ll study the extra information this brings later in the course.
For now, it’s worth remembering that a large fraction of the information we have really does come from measuring combinations of the luminosity distance and angular diameter distance at a variety of redshifts. Generally speaking, the observations lock into a nice web of different constraints which support a consensus that the universe is flat ($\Omega_k \approx 0$) and composed of 5% atoms, 26% cold dark matter and 69% cosmological constant: $\Omega_b \approx 0.05$, $\Omega_c \approx 0.26$, $\Omega_\Lambda \approx 0.69$, with $h \approx 0.67$.
Interestingly, in the last few years this consensus has started to break slightly. Having multiple sets of data that each constrain the makeup of our Universe in different ways gives one freedom to combine these data or to inspect them independently. Luminosity distance-based measures¹⁵ of the expansion rate ($h \approx 0.73$) seemingly disagree with angular diameter distance-based measures¹⁶ ($h \approx 0.68$). Viewed from one angle, the disagreement is minor: who cares about precisely how fast the Universe expands? But, statistically speaking, the data are certainly in tension. There is considerable controversy amongst cosmologists about how to interpret this mismatch: it could be a fluke (unlikely measurements do happen), a systematic error in the data, or it could show a deficiency in the cosmological model that ties all the observations together. If it’s the last of those, we certainly should care, because it might be pointing to new physics that we don’t yet understand.
¹⁵ https://arxiv.org/abs/1604.01424
¹⁶ https://arxiv.org/abs/1502.01589
For now, though, let’s return to the default model assuming that any correction from new physics will be small.
The epoch of matter-radiation equality, when $\rho_m = \rho_r$, has special significance for the generation of large scale structure and the development of CMB anisotropies because perturbations grow at different rates in the two different eras. As it turns out:

$$\rho_r = \Omega_r\,\rho_{\mathrm{crit},0}\,a^{-4}, \quad\text{with}\quad \Omega_r h^2 \simeq 4.2\times10^{-5}. \qquad (155)$$
The matter density, on the other hand, obeys
$$\rho_m = \Omega_m\,\rho_{\mathrm{crit},0}\,a^{-3}. \qquad (156)$$
By equating the two densities, we can solve for the scalefactor at matter-radiation equality:
$$a_{\mathrm{eq}} = \frac{\Omega_r}{\Omega_m} \simeq \frac{4.2\times10^{-5}}{\Omega_m h^2}. \qquad (157)$$
In terms of redshift,
$$1 + z_{\mathrm{eq}} = a_{\mathrm{eq}}^{-1} \simeq 2.4\times10^{4}\,\Omega_m h^2. \qquad (158)$$
As $\Omega_m h^2$ increases at a fixed CMB temperature, equality is pushed back to higher redshifts and earlier times. It is very important that $z_{\mathrm{eq}}$ is at least a factor of a few larger than the redshift where photons decouple from matter, $z_{\mathrm{dec}} \simeq 1100$, so that the photons decouple when the universe is well into the matter-dominated era.
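A quick numerical sketch of eqs (157)–(158), with $\Omega_m h^2 \approx 0.14$ assumed purely for illustration:

```python
Om_m_h2 = 0.14        # assumed matter density
Om_r_h2 = 4.2e-5      # radiation density from the CMB + neutrinos, as above

a_eq = Om_r_h2 / Om_m_h2          # eq. (157)
z_eq = 1.0 / a_eq - 1.0           # eq. (158)
print(f"a_eq = {a_eq:.1e}, z_eq = {z_eq:.0f}")   # z_eq ~ 3300, comfortably above z_dec ~ 1100
```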
In the absence of gravity, only changes in energy from one state to another are measurable; the zero-point of the energy is arbitrary. However, in gravitation, where the curvature of spacetime couples directly to the energy density itself, the zero-point of the energy becomes observable. This opens up the possibility of vacuum energy: the energy density of empty space.
Vacuum energy cannot point in a preferred direction either in space or spacetime. After all, in a vacuum, there is nothing there in the spacetime to point. This implies that the associated energy-momentum tensor is Lorentz-invariant in locally inertial coordinates, $T^{\mathrm{(vac)}}_{\mu\nu} = -\rho_{\mathrm{vac}}\,\eta_{\mu\nu}$. The equivalence principle immediately gives us the generalization to an arbitrary frame,

$$T^{\mathrm{(vac)}}_{\mu\nu} = -\rho_{\mathrm{vac}}\,g_{\mu\nu}. \qquad (159)$$
Comparing to the perfect fluid energy-momentum tensor, $T_{\mu\nu} = \left(\rho + p\right)u_\mu u_\nu + p\,g_{\mu\nu}$, the vacuum looks like a perfect fluid with an isotropic pressure opposite in sign to the density:

$$p_{\mathrm{vac}} = -\rho_{\mathrm{vac}}. \qquad (160)$$
If we decompose the energy-momentum tensor into a matter piece plus a vacuum piece (159), the Einstein equation is:
$$R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = 8\pi G\left(T_{\mu\nu} - \rho_{\mathrm{vac}}\,g_{\mu\nu}\right). \qquad (161)$$
Putting this aside and coming at it from a different angle: Einstein was unsatisfied with his original field equations. They seemed to require that the universe either expanded or collapsed. To counteract this and obtain a static universe (see Exercise 6), he added a cosmological constant term, modifying the field equations to:
$$R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda\,g_{\mu\nu} = 8\pi G\,T_{\mu\nu}. \qquad (162)$$
This modification is exactly the same as adding a vacuum energy term to the energy-momentum tensor, with the identification:
$$\rho_{\mathrm{vac}} = \frac{\Lambda}{8\pi G}. \qquad (163)$$
For that reason, the change to the Friedmann equations is easy to figure out (without having to rederive them from the modified field equations (162)); let’s write them here for completeness:
$$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2} + \frac{\Lambda}{3}, \qquad (164)$$
$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right) + \frac{\Lambda}{3}. \qquad (165)$$
$\Lambda$ has dimensions of $[\mathrm{length}]^{-2}$ while $\rho_{\mathrm{vac}}$ has units of an energy density. So $\Lambda$ defines a scale (whereas GR is otherwise scale-free). What should be the value of $\Lambda$? There is no known way to precisely calculate this at present, but since the reduced Planck mass is

$$M_{\mathrm{pl}} = \left(8\pi G\right)^{-1/2} \simeq 2.4\times10^{18}\ \mathrm{GeV} \qquad (166)$$
and the reduced Planck length is
$$\ell_{\mathrm{pl}} = M_{\mathrm{pl}}^{-1} \simeq 8\times10^{-35}\ \mathrm{m}, \qquad (167)$$
one might guess that a theory of quantum gravity would require
$$\Lambda \sim \ell_{\mathrm{pl}}^{-2}, \quad\text{i.e.}\quad \rho_{\mathrm{vac}} \sim M_{\mathrm{pl}}^{4}. \qquad (168)$$
However, this is a dramatically bad guess compared to the observational measurement…
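To see just how bad, here is a back-of-the-envelope comparison of my own, taking the reduced Planck mass $\simeq 2.4\times10^{18}\,$GeV and an observed vacuum density of order $(2.3\times10^{-3}\,\mathrm{eV})^4$ (roughly $0.7\,\rho_{\mathrm{crit}}$):

```python
import math

M_pl_GeV = 2.4e18                        # reduced Planck mass, GeV
rho_vac_guess = M_pl_GeV**4              # 'natural' vacuum energy density, GeV^4

rho_vac_obs = (2.3e-12)**4               # observed ~ (2.3e-3 eV)^4, expressed in GeV^4

print(f"guess / observed ~ 10^{math.log10(rho_vac_guess / rho_vac_obs):.0f}")   # ~10^120
```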
So, there are three conundrums associated with $\Lambda$:
Why is $\Lambda$ so small?
Why is $\Lambda \neq 0$?
Why is $\rho_\Lambda \sim \rho_m$ today?
The last is the so-called coincidence problem and is logically distinct from the first two because it is a statement only about the present epoch of the Universe. Note that
$$\frac{\rho_\Lambda}{\rho_m} \propto a^3. \qquad (169)$$
In future, the vacuum density looks set to rapidly overwhelm the matter. In the past, the vacuum density was negligible. It is seemingly a big coincidence to find ourselves at the epoch where we can observe the transition.
In the present-day universe, as we discussed above, the radiation density is significantly lower than the matter density. But both the vacuum and matter are dynamically important. Referring back to the table in Section 2, as $a \to 0$ in the past, curvature and vacuum will be negligible and the universe will behave as though it has only matter content (until, at sufficiently early times, the radiation becomes important). As $a \to \infty$ in the future, curvature and matter will be negligible.
That analysis is sound unless the scale factor never reaches infinity because the universe begins to recollapse at some finite time (i.e. $\dot a$ reaches zero then turns negative). Possible scenarios where this happens are:
$\Lambda < 0$: the universe always decelerates and recollapses (if the universe gets too large, the negative vacuum energy starts dominating and pulls it back together again).
$\Lambda > 0$: recollapse is possible if $\Omega_m$ is sufficiently large that it halts the universal expansion before $\Lambda$ dominates.
To determine the dividing line between perpetual expansion and eventual recollapse, note that collapse requires $\dot a$ to pass through $0$ as it changes from positive to negative:

$$\left(\frac{\dot a}{a}\right)^2 = H_0^2\left[\Omega_m\,a_*^{-3} + \Omega_k\,a_*^{-2} + \Omega_\Lambda\right] = 0, \qquad (170)$$

where $a_*$ is the scale-factor at turnaround. Dividing by $H_0^2$, using $\Omega_k = 1 - \Omega_m - \Omega_\Lambda$, multiplying through by $a_*^3$ and rearranging, we obtain

$$\Omega_\Lambda\,a_*^3 + \left(1 - \Omega_m - \Omega_\Lambda\right)a_* + \Omega_m = 0. \qquad (171)$$
But what we really care about is not really $a_*$ but the range of $\Omega_\Lambda$, given $\Omega_m$, for which there is a real solution to (171). The range of $\Omega_\Lambda$ for which the universe will expand forever is given by:

$$\Omega_\Lambda \geq \begin{cases} 0 & 0 \leq \Omega_m \leq 1 \\ 4\,\Omega_m\cos^3\!\left[\dfrac{1}{3}\cos^{-1}\!\left(\dfrac{1-\Omega_m}{\Omega_m}\right) + \dfrac{4\pi}{3}\right] & \Omega_m > 1. \end{cases} \qquad (172)$$
This expression really does follow from (171), but I wouldn’t recommend attempting to derive it unless you’re very knowledgeable about cubic equations (or you use a computer algebra package like Mathematica).
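If you would rather trust a computer than a cubic, here is a numerical cross-check of my own: it decides whether a matter + $\Lambda$ universe recollapses by looking for a zero of $H^2(a)$ at $a > 1$ (radiation neglected), and compares against the analytic boundary (172):

```python
import math

def expands_forever(Om_m, Om_L, a_max=1e4, n=100000):
    """True if H^2(a)/H0^2 stays positive for all a in (1, a_max]."""
    Om_k = 1.0 - Om_m - Om_L
    for i in range(1, n + 1):
        a = 1.0 + (a_max - 1.0) * i / n
        if Om_m*a**-3 + Om_k*a**-2 + Om_L <= 0.0:
            return False            # expansion halts: recollapse
    return True

def boundary(Om_m):
    """Minimum Omega_Lambda for perpetual expansion, eq. (172)."""
    if Om_m <= 1.0:
        return 0.0
    return 4*Om_m*math.cos(math.acos((1 - Om_m)/Om_m)/3 + 4*math.pi/3)**3

for Om_m in (0.3, 1.5, 3.0):
    Om_L_crit = boundary(Om_m)
    print(Om_m, round(Om_L_crit, 4),
          expands_forever(Om_m, Om_L_crit + 0.01),    # just above the boundary: True
          expands_forever(Om_m, Om_L_crit - 0.01))    # just below the boundary: False
```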
When $\Omega_\Lambda = 0$,
open and flat universes ($\Omega_m \leq 1$) will expand forever;
closed universes ($\Omega_m > 1$) will recollapse.
There is a “folk wisdom” that this correspondence is always true, but it is only true in the absence of vacuum energy. The relationship between spatial curvature and cosmic fate is illustrated in Fig. 8. The current cosmological data suggests the universe is flat to good accuracy, but no measurement is perfect so the uncertainty allows for either an open or closed universe depending on the sign of the observational errors. The data nonetheless strongly favours perpetual expansion (at least under the assumption that vacuum energy remains constant).
Let’s end this section by looking at static solutions to the Friedmann equations. To be static, we must have not only $\dot a = 0$, but also $\ddot a = 0$.
☞ Exercise 5I
Verify that one can only get a static solution, $\dot a = \ddot a = 0$, if

$$\rho = -3p \qquad (173)$$

and the spatial curvature is non-vanishing:

$$\frac{k}{a^2} = \frac{8\pi G}{3}\rho. \qquad (174)$$
Because the energy density and pressure must be of the opposite sign, these conditions can’t be fulfilled in a universe containing only radiation and matter. Einstein looked for a static solution because at the time, the expansion of the universe had not yet been discovered. He added the cosmological term, whereby one can satisfy the static conditions with
$$\Lambda = 4\pi G\,\rho_m = \frac{k}{a^2}. \qquad (175)$$
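As a quick symbolic check of eq. (175) – my own sketch – substituting $\Lambda = 4\pi G\rho$ and $k/a^2 = \Lambda$ into the Friedmann equations (164)–(165) with $p=0$ does indeed give a static solution:

```python
import sympy as sp

G, rho = sp.symbols('G rho', positive=True)
Lam = 4 * sp.pi * G * rho                  # eq. (175), first condition
k_over_a2 = Lam                            # eq. (175), second condition

adot_over_a_sq = sp.Rational(8, 3)*sp.pi*G*rho - k_over_a2 + Lam/3   # RHS of eq. (164)
addot_over_a = -sp.Rational(4, 3)*sp.pi*G*rho + Lam/3                # RHS of eq. (165), p = 0

print(sp.simplify(adot_over_a_sq), sp.simplify(addot_over_a))        # both 0: static
```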
This Einstein-static universe has been of little empirical interest since the 1920s, when Hubble established that the universe is expanding. Einstein described introducing the $\Lambda$ term as a “blunder” – but the term was later resurrected to describe the observed accelerated expansion.
Starting from the cosmological principle that the Universe is homogeneous and isotropic, we can derive a spatial metric. It then turns out that our previous guess at a spatial metric, Eq. (23) was valid but not general.
The more general form incorporates both a scalefactor $a(t)$ (changing with time) and spatial curvature $k$. Universes with $k<0$, $k=0$ and $k>0$ are respectively referred to as open, flat and closed.
The general form can be written as Eq. (129). There are actually many different ways to write this metric, but they all contain the same physical information – if this seems surprising, just think about the many different coordinate systems for the flat case (e.g. spherical or cylindrical polar coordinates): things can look different but represent the same space. If (and only if) the spatial curvature is zero (flat), one way to write the metric is our original guess, Eq. (23).
The density of matter, radiation, or any other content of the Universe changes with scalefactor depending on its equation of state $w$. Matter, radiation and the cosmological constant have $w=0$, $w=1/3$ and $w=-1$ respectively. For some purposes we can also think of curvature as being another source of “energy” in the Universe with equation of state $w=-1/3$.
Whether the Universe is spatially curved or flat comes down to the density of material relative to the critical density, which itself depends on the expansion rate, through Eq. (139). We define density parameters as the ratio between a given density and the critical density. The Friedmann equation then links the value of $k$ to these density parameters; see e.g. Eq. (141).
The real Universe is observed to be spatially flat within experimental errors. We discussed how the actual densities involved are estimated.
These observations also show the existence of a cosmological constant, accelerating the expansion of our Universe. Its existence can be independently demonstrated using supernovae measurements or BAO measurements.
The cosmological constant has similar characteristics to vacuum energy expected from quantum mechanics, but with drastically the wrong magnitude. Sometimes the phenomenon of accelerated expansion is said to be due to “dark energy”, but this doesn’t really carry a specific meaning – it just sounds good on grant proposals.
A cosmological constant can make the dynamics of the Universe a little counterintuitive. In particular, a universe can be spatially closed but still expand forever if there is a positive cosmological constant.