One of the greatest achievements of the scientific era is figuring out the scale of the universe we live in. It is related to a very important story in the world of astronomy this week. Let’s start at the beginning, though. And I mean, the very beginning.
The initial resistance to Copernicus’ heliocentric model was not, as sometimes painted, a dogmatic attachment to geocentrism, but concern over its implications for the size of the celestial sphere. Once the Sun was made the centre of the universe, the supposed orb to which the stars were fixed should appear to wobble from the point of view of the Earth in its annual orbit. No such wobble was seen; therefore Copernicus must be wrong, and his 1543 book, De Revolutionibus, made few waves with either scientists or the church.
The heliocentric model had little else to recommend it. Although it simplified some of the mathematics of the Ptolemaic geocentric model, it didn’t simplify it by much. That’s because Copernicus was wedded to the idea of circular planetary orbits, so he still needed complicated epicycles or “wheels within wheels” on which to mount the celestial bodies. Things changed quickly with three new developments. First, Galileo turned a telescope on the sky in 1609, and in 1610 he published his startling discovery of “stars” (actually moons) orbiting Jupiter. This discovery on its own displaced the Earth as the centre of all motion. He subsequently observed the phases of Venus associated with its motion around the Sun. Then, in the same decade, Kepler published his laws of planetary motion (ironically rejected by Galileo). It took some time for Kepler’s Laws to be accepted, starting with a successful prediction of a transit of Mercury across the Sun, observed by Gassendi in 1631. It was not until Newton showed in 1687 that Kepler’s ellipses were the inevitable result of a central inverse-square law of gravity that the whole edifice of the new solar system astronomy became mainstream.
Meanwhile the scale of the solar system was being probed for the first time. Kepler’s Laws gave a precise relationship between the sizes of the planetary orbits and their periods. The periods could be measured, but none of the distances were known. Just one measured distance was enough to fix all the others. In 1672, Cassini and Richer made simultaneous measurements of Mars from Paris and French Guiana. The angular shift of Mars against the background stars – its parallax – gave the distance between Mars and the Earth. Hipparchus had used the same technique to measure the distance to the Moon to amazing accuracy almost 2,000 years earlier. By extrapolation from Cassini’s result, the scale of everything else in the solar system was known. The Earth–Sun distance came out within 6% of the modern value, and wasn’t improved upon until the transits of Venus in 1761 and 1769.
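To see how a single measurement anchors the whole system, here is a rough Python sketch. It uses modern round numbers (not Cassini’s actual data) and simplifies to circular orbits and an ideal opposition, so treat it as an illustration of Kepler’s third law rather than a reconstruction of the 1672 analysis:

```python
# Illustrative only: modern round numbers, circular orbits, ideal opposition.

# Kepler's third law: a^3 is proportional to T^2, so a = T**(2/3)
# with a in astronomical units (AU) and T in years.
periods = {"Mercury": 0.241, "Venus": 0.615, "Earth": 1.0,
           "Mars": 1.881, "Jupiter": 11.86, "Saturn": 29.46}
orbits_au = {planet: t ** (2 / 3) for planet, t in periods.items()}

# At opposition the Earth-Mars gap is roughly a_Mars - a_Earth in AU.
# One parallax measurement of that gap in kilometres fixes the AU itself:
earth_mars_km = 78e6                      # ~modern opposition distance
au_km = earth_mars_km / (orbits_au["Mars"] - 1.0)

print(f"1 AU ≈ {au_km / 1e6:.0f} million km")
print(f"Saturn ≈ {orbits_au['Saturn'] * au_km / 1e9:.1f} billion km")
```

With these inputs the AU comes out near 149 million kilometres and Saturn’s orbit near 1.4 billion, in line with the figures quoted above.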
Now that the heliocentric model was firmly established, the problem of Copernicus came back to haunt astronomers. Saturn was determined to orbit an incredible 1.5 billion kilometres from the Sun. And yet the continued absence of stellar parallax implied distances vaster still, to an almost improbable degree. Nevertheless, the physical evidence for heliocentrism was by this stage beyond dispute, so the failure to detect stellar parallax was assumed to be a problem of measurement accuracy. Knowing the scale of the solar system provided a new baseline for parallax measurements: by taking measurements six months apart, the 300-million-kilometre diameter of the Earth’s orbit could be used. Even so, it took until 1838 before Bessel detected the first parallax shift of a star. Nobody had suspected that the nearest star would turn out to be 40 trillion kilometres distant!
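The geometry behind such a measurement is just trigonometry on an extremely long, thin triangle. A minimal sketch, using the modern parallax of Proxima Centauri (the nearest star) rather than Bessel’s actual target, 61 Cygni:

```python
import math

# Distance from annual parallax. The parallax angle p is half the apparent
# shift of the star over six months; the baseline is the Earth-Sun distance.
AU_KM = 1.496e8            # one astronomical unit in km
LY_KM = 9.461e12           # one light year in km

def parallax_distance_km(p_arcsec):
    """Thin-triangle geometry: d = baseline / tan(p)."""
    p_rad = math.radians(p_arcsec / 3600.0)
    return AU_KM / math.tan(p_rad)

# Proxima Centauri's parallax is about 0.77 arcseconds:
d = parallax_distance_km(0.77)
print(f"{d:.1e} km, about {d / LY_KM:.1f} light years")
```

The angle involved is well under an arcsecond, which is why the detection had to wait for the instruments of the 1830s.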
It is more convenient to measure such distances in light years – about four light years to the nearest star. Parallax measurements were possible out to a few hundred light years. (Recently, space-based astrometry has extended this by two orders of magnitude. That’s still only about the distance to the centre of our own galaxy.) Measuring greater distances would need techniques of a wholly different nature. While studying variable stars in 1908, Leavitt discovered a relationship between the brightness of a certain type of variable – the Cepheids – and their period of pulsation. The distances to nearby variables were known from parallax measurements, and there is a well-known inverse-square relation between the apparent brightness of a star as seen from Earth and its intrinsic brightness. Leavitt could therefore establish the intrinsic brightness of Cepheids of a given pulsation rate, and use it to determine the distances to Cepheids too far away to measure by parallax.
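The inverse-square step can be sketched with made-up numbers: two Cepheids with the same pulsation period are assumed equally luminous, so their relative distances follow from their relative apparent brightnesses alone. Everything below is hypothetical, for illustration only:

```python
import math

# Inverse-square law: apparent flux falls as 1/d^2, so two stars of equal
# intrinsic brightness have distances in the ratio sqrt(flux_ref/flux_target).

def distance_from_flux(d_ref, flux_ref, flux_target):
    """Distance to a target assumed equally luminous as the reference."""
    return d_ref * math.sqrt(flux_ref / flux_target)

# Hypothetical numbers: a calibrator Cepheid 1,000 light years away (known
# from parallax) appears a million times brighter than a same-period Cepheid
# in a distant system:
d = distance_from_flux(1000, flux_ref=1.0, flux_target=1e-6)
print(f"Target distance ≈ {d:,.0f} light years")
```

A million-fold drop in apparent brightness corresponds to a thousand-fold increase in distance – which is how a calibration good to a few hundred light years reaches out to other galaxies.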
Cepheids are visible in nearby galaxies, and Leavitt’s work was instrumental in settling whether the galaxies are separate systems of stars separated by vast distances. This so-called Great Debate was resolved by Hubble when, in 1924, he used Cepheids to measure the distance to our neighbouring Andromeda galaxy. After a series of galaxy measurements he made his most momentous discovery in 1929. It had been known since Slipher’s spectroscopy in the 1910s that the light from some galaxies was shifted towards the red end of the spectrum. Hubble showed that the amount of redshift was proportional to the distance determined by the Cepheid measurements. Moreover, the redshift was caused by rapid movement away from our own galaxy, so that the galaxies were getting further apart. This marked the amazing discovery of the expansion of the universe.
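Hubble’s proportionality is the simplest possible distance law: velocity equals a constant times distance. A minimal sketch using the approximate modern value of the constant (Hubble’s own 1929 estimate was roughly seven times larger):

```python
# Hubble's law: recession velocity proportional to distance, v = H0 * d.
# For small redshift z, v ≈ c * z, so d ≈ c * z / H0.
H0 = 70.0                  # km/s per megaparsec (approximate modern value)
C = 299_792.458            # speed of light in km/s

def hubble_distance_mpc(z):
    """Distance in megaparsecs for a small redshift z (valid for z << 1)."""
    return C * z / H0

for z in (0.005, 0.01, 0.02):
    print(f"z = {z}: {hubble_distance_mpc(z):.0f} Mpc")
```

Read in reverse, this is what makes redshift so valuable: once the constant is calibrated against Cepheid distances, a spectrum alone yields a distance.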
Now the measurement of distances took on a different significance. It was not just determining the scale of the universe but also its rate of expansion. Cepheids are very luminous variable stars and so can be seen at great distances, but cosmological distances are so vast that they serve only for relatively nearby galaxies. Just as planetary parallax had given us the baseline for stellar parallax, and stellar parallax had calibrated the Cepheid measurements, we needed another rung on the “Cosmic Distance Ladder” to lift us to yet bigger scales. Unfortunately it is not easy to find tight relationships between distance and other observables. A number have been found over the years, but most are fairly loose or have only statistical value. What we need is a reliable and bright “standard candle” that lets us determine the very largest distances in the cosmos.
This is where supernovae came to the rescue. These are stellar explosions so bright that they can be seen from the other side of the universe. Astronomers classified supernova explosions according to features of their spectra, and over the course of the 20th century the physical mechanisms behind them came to be understood. The Type II explosions were no use as standard candles. They are core collapse events, where a star that has exhausted its fuel can no longer sustain its own weight. Its core is crushed by gravity to a neutron star or black hole, depending on the star’s original mass. And therein lies the problem – stars come in different sizes, and so too do Type II supernova explosions. There is very little standard about them.
The Type Ia supernovae are a different kettle of fish. These occur when a white dwarf star accretes matter from a binary companion. Most stars in the universe are binaries, so they evolve alongside another star. Late in life, one star may become a dense white dwarf while the companion goes through the red giant stage. The weak surface gravity of the huge red giant cannot hold down its outer material, which begins to be siphoned off by the white dwarf. The white dwarf can thereby be pushed above a critical mass limit – the Chandrasekhar mass, about 1.4 times that of the Sun – at which it can no longer withstand the gravitational force. Unlike a core collapse supernova, though, the white dwarf undergoes a thermonuclear detonation which destroys it completely. Because every white dwarf approaches the same critical mass slowly, all Type Ia explosions have roughly a standard size and brightness. This is the sort of standard candle we want! Actually it’s not quite that simple – there is variation among Type Ia events, but the rate of decline of the afterglow, which is powered by radioactive decay of elements created in the explosion, can be used to determine the peak intrinsic brightness. The picture below shows a series of light curves from the Calán/Tololo Supernova Survey. After scaling for duration, all the supernovae conform to a standard light curve with little scatter.
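The standardization step can be illustrated with a toy decline-rate correction in the spirit of the Phillips relation: slower-declining events are intrinsically brighter, so subtracting a term proportional to the decline rate brings each event to a common peak. The slope and the sample events below are invented for illustration, not fitted to real data:

```python
# Toy Phillips-style standardization. dm15 is the drop in magnitude over the
# 15 days after peak; smaller dm15 (slower decline) means a brighter event.
# Remember magnitudes run backwards: more negative = brighter.
# The slope and the events are invented, not real fitted values.

def standardize(peak_abs_mag, dm15, slope=0.78):
    """Correct the peak absolute magnitude using the decline rate dm15."""
    return peak_abs_mag - slope * (dm15 - 1.1)

# (peak absolute magnitude, dm15) for three hypothetical supernovae:
events = [(-19.256, 0.9), (-19.100, 1.1), (-18.944, 1.3)]
corrected = [standardize(m, d15) for m, d15 in events]
print(corrected)   # all three collapse to about -19.1
```

Once every event is corrected to the same intrinsic peak, the inverse-square law turns each observed peak brightness into a distance, exactly as with the Cepheids but on a far larger scale.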
Armed with this knowledge, astronomers set out to get the best set of distance measurements yet and, by combining them with redshift measurements, to calibrate the expansion rate of the universe. In 1998 two separate teams – the Supernova Cosmology Project led by Saul Perlmutter and the High-Z Supernova Search Team formed by Brian Schmidt and led by Adam Riess – used Type Ia supernovae for this task. The result is history, and it shocked the world. Measurements made by Hubble and subsequent observers had shown a scatter in the redshift–distance relationship through which a straight line could be drawn to give the expansion rate. The 1998 measurements were tight enough to show that the relation was not a straight line after all. The inferred expansion rate fell with distance: the further away a supernova was (and hence the further back in time), the slower the expansion when its light was emitted. This means the universe is expanding faster now than it was in the past. The accelerating expansion is what has come to be explained by dark energy.
It is hard to explain in brief how thoroughly dark energy has been woven into theories of cosmic evolution. All the way back in 1917, Einstein’s General Theory of Relativity had acquired an optional parameter which Einstein himself believed might be the key to stopping the universe from collapsing under its own weight. This cosmological constant, denoted by Λ (the Greek uppercase letter lambda), turned out to be unnecessary after Hubble’s discovery of the galactic redshift. The universe wasn’t collapsing because it was expanding. Einstein reportedly called it “my biggest blunder”. The cosmological constant hadn’t gone away, though, and was ready and waiting to be rehabilitated when the 1998 expansion measurements came in. Meanwhile, there were other problems with the cosmos. Its apparent uniformity on large scales could not be explained by ordinary cosmic expansion. If you look in opposite directions in the sky you are looking at patches that have not been in causal contact with each other since shortly after the Big Bang, yet they have almost exactly the same temperature and density. In the 1980s this became the subject of a new theory of cosmic inflation, which posited that the universe underwent an extremely rapid initial expansion that stretched a single causally connected region to encompass everything we now see, smoothing out any variations. The tentative explanation for the inflationary expansion was some sort of “vacuum energy” inherent in space itself. Dark energy might be a leftover remnant of that initial kick, after decaying to a lower energy level.
So dark energy had a triple role: explaining the accelerating expansion, acting as the leftover vacuum energy from cosmic inflation, and providing a nice place to hang the cosmological constant term of Einstein’s equations. It seemed almost too neat to be true, and perhaps it was. Combined with the separate hypothesis of dark matter, needed to explain various unseen gravitational components of galaxies, the standard model of how the universe evolved became the ΛCDM concordance model (Λ plus Cold Dark Matter). A generation of science students (including yours truly) has been told that any misgivings about not being able to see the 95% of the universe which is dark are misplaced.
Roll forward to 2011, and Perlmutter, Schmidt and Riess are receiving the Nobel Prize in Physics “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae”. But even then there were murmurings about possible problems with the supernova measurements. One team came up with evidence that not all Type Ia events were the same size, even after adjusting for light curve duration. Now this would be a problem, though not necessarily a fatal one. Some scatter in the luminosity–distance relationship is tolerable as long as it is small enough and random enough; statistically, the conclusions may still stand.
But what if there is a systematic bias in the measurements? For some time there has been concern that the Type Ia light curve may differ between stellar populations of different ages. The more redshifted events are further away, so we see them as they were in an earlier epoch, when the stars involved belonged to younger populations. A variation in the light curve with redshift would be a potential disaster for the measurements, as it would undermine the whole basis on which the accelerating expansion was computed.
That’s where this week’s news comes in, claiming that the key assumption is in error. A research team has found that older stellar populations produce brighter Type Ia supernovae, at a 99.5% confidence level. The bombshell is that “when the luminosity evolution of SN is properly taken into account, the team found that the evidence for the existence of dark energy simply goes away”.
Commenting on the result, Prof. Young-Wook Lee (Yonsei Univ., Seoul), who led the project, said: “Quoting Carl Sagan, extraordinary claims require extraordinary evidence, but I am not sure we have such extraordinary evidence for dark energy. Our result illustrates that dark energy from SN cosmology, which led to the 2011 Nobel Prize in Physics, might be an artifact of a fragile and false assumption.”
If this is accepted, it’s going to be quite a shock to the system for the physics community. That’s not guaranteed, of course, but the latest result is based on nine years of work, so it can’t be lightly discarded. It’ll be published in this month’s Astrophysical Journal, but in the meantime here’s the arXiv preprint along with its abstract:
Early-type Host Galaxies of Type Ia Supernovae. II. Evidence for Luminosity Evolution in Supernova Cosmology
Yijung Kang, Young-Wook Lee, Young-Lo Kim, Chul Chung, Chang Hee Ree
The most direct and strongest evidence for the presence of dark energy is provided by the measurement of galaxy distances using type Ia supernovae (SNe Ia). This result is based on the assumption that the corrected brightness of SN Ia through the empirical standardization would not evolve with look-back time. Recent studies have shown, however, that the standardized brightness of SN Ia is correlated with host morphology, host mass, and local star formation rate, suggesting a possible correlation with stellar population property. In order to understand the origin of these correlations, we have continued our spectroscopic observations to cover most of the reported nearby early-type host galaxies. From high-quality (signal-to-noise ratio ~175) spectra, we obtained the most direct and reliable estimates of population age and metallicity for these host galaxies. We find a significant correlation between SN luminosity (after the standardization) and stellar population age at a 99.5% confidence level. As such, this is the most direct and stringent test ever made for the luminosity evolution of SN Ia. Based on this result, we further show that the previously reported correlations with host morphology, host mass, and local star formation rate are most likely originated from the difference in population age. This indicates that the light-curve fitters used by the SNe Ia community are not quite capable of correcting for the population age effect, which would inevitably cause a serious systematic bias with look-back time. Notably, taken at face values, a significant fraction of the Hubble residual used in the discovery of the dark energy appears to be affected by the luminosity evolution. We argue, therefore, that this systematic bias must be considered in detail in SN cosmology before proceeding to the details of the dark energy.
EDIT: My girl Sabine weighs in on this and other dark energy demolitions (plus some rebuttals):