Amateur Astronomy


Have a look at this boards thread …

I was out trying (and failing) to spot them yesterday. At 6.41 am there were thirty satellites chasing each other in a line in just a seven minute window. Even when fully spread out in the plane, last week’s single launch will result in a satellite passing over every 90 seconds. At mag. 2.5 they would be easily visible to the naked eye (about 60% as bright as the Pole star) and presumably fairly disruptive to photography. And 7,500 of the 12,000 proposed satellites will be at the 350 km height that produces this maximum brightness. I suppose clever telescope scheduling software could maybe omit exposures that were going to be affected.

If anyone wants to spot these from Ireland, the sats from the recent launch are getting later and fading into the dawn light over the next few days. Tomorrow, Monday, might be the last time to catch the “string of pearls” with thirty sats clustered together. Look south at 7.33 a.m. and the satellites will all pass from west to east over the course of six or seven minutes. They’ll be at 60 degrees elevation – two thirds of the distance from horizon to zenith. They are up to magnitude 3 now, so more difficult to spot than a couple of days ago and with sunrise at 8.15 am there’ll be plenty of light in the sky.

Blindjustice, I don’t know where you are at present, but the sats become prominent in New South Wales on evening of 4th December. String of 30 pearls from horizon to horizon and passing almost overhead from 21.45. How about deploying your awesome photography skills? :icon_smile:


What’s their path? I’m in Adelaide


Next string-o’-pearls bright pass from there is December 5th, 9.37pm -ish. Magnitude 2.6 and they go literally 90 degrees overhead.

P.S. Just got clouded out from here for the second day. I was wrong about it being last chance saloon. Last chance is tomorrow, 6.20am.


Winter is over! Or so I claim at this time each year. In the USA the weatherman will tell you that astronomical winter doesn’t even start until the solstice. That’s on December 22nd. At that point our days are getting longer again.

Not being a morning person, I like to count from when the sunset stops getting earlier. Because of the eccentricity of the Earth’s orbit, the solstice is neither the day when the sun rises latest nor the day it sets earliest, even though it is the shortest day. The latest sunrise comes about a week after the solstice and the earliest sunset about a week before. This year the earliest sunset is around the 13th or 14th of December.

But almanacs tend to only give the time of sunset to the nearest minute, and this changes very slowly around the solstice. In Dublin the sunset stalls at 4:06 pm from the 9th until the 18th of December. So I cherrypick the first day of the earliest sunset to the nearest minute … which was yesterday.

My cheating approach saves nearly two weeks compared to waiting for the solstice. Also from a practical point of view, while the continental US has yet to see the heavy snows of winter, here in Ireland a mild spell could see the trees start to bud again anytime from now on. So while it might be too cheeky to call this the start of Spring, I wish you …

Happy “earliest-sunset-to-the-nearest-minute” day!


The European CHEOPS spacecraft was due to launch from French Guiana this morning. Launch was postponed in the last hour of the countdown and has been set back by at least a day due to a software error.

CHEOPS is the ugly acronym for CHaracterising ExOPlanet Satellite. The objectives are more elegant: it will follow up known exoplanets with precision radius measurements. The two previous exoplanet space missions, Kepler and TESS (still ongoing), observed broad swaths of the sky for long periods to detect exoplanets transiting across their host stars. Kepler stared at a fixed 115 square degree region, mostly finding exoplanets at distances of 600 to 3,000 light years. TESS is covering almost the entire sky in 2,200 square degree strips. These are far bigger than the Kepler region, but TESS only looks at brighter nearby stars, less than 300 ly away.

The transit method looks for the dip in light from a star as a planet traverses across its face. Only a very small fraction of stellar systems are aligned edge-on to us so that we see planetary transits, but statistically the number is still high. Light curves from a number of successive transits are folded into a single higher precision curve which is used to estimate the planet’s radius. High precision is required. The dip in light is proportional to the area of the planet’s disc and thus the square of its radius. A Jupiter-like planet (10% of the star’s radius) produces a 1% light dip. An Earth-like planet (1% of stellar radius) produces a 0.01% dip.
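The dip sizes quoted above are just the ratio of disc areas; a quick sketch of the arithmetic:

```python
# Transit depth: the fraction of starlight blocked is the ratio of the
# planet's disc area to the star's, i.e. (R_planet / R_star) squared.

def transit_depth(radius_ratio: float) -> float:
    """Fractional dip in starlight for a given planet/star radius ratio."""
    return radius_ratio ** 2

# Jupiter-like planet at 10% of the star's radius: a 1% dip
print(round(transit_depth(0.10), 4))   # 0.01

# Earth-like planet at 1% of the star's radius: a 0.01% dip (100 ppm)
print(round(transit_depth(0.01), 6))   # 0.0001
```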

Candidate detections are then followed up with ground-based spectroscopic measurements. These measure the Doppler shifts caused by the wobble induced in the star by the planet’s gravity. It’s essentially a speeding camera for stars. Amazingly, the speed of motion of a star’s surface can be measured from thousands of light years away almost as accurately as your car can be clocked from a few hundred metres. With such extreme accuracy, planetary masses can be well determined.

The weak link is the radius determination from the spacecraft light curves. The 0.01% dip for an Earth-like planet crossing a Sun-like star is only 100 parts per million. Kepler was designed to comfortably exceed that for a long exposure of a fairly bright star. But the stars themselves have some intrinsic variability, and fainter stars produce noisier images. And bear in mind that Kepler was looking at a large area of sky, so even with its 100 megapixel camera the light from each star was falling on only a handful of pixels. When you don’t know what you’re looking for you have to look at hundreds of thousands of stars simultaneously in order to get a good harvest of those chance aligned edge-on systems.

This is where CHEOPS comes in. It’s not looking for new exoplanets. It’s looking for more accurate radius measurements of already known ones. This means it can point directly at known sources instead of at a large patch of sky, and it will do several thousand pointings over its mission life. Its field of view is therefore less than a tenth of a square degree, a thousand times smaller than Kepler’s and 25 thousand times smaller than TESS’s. The onboard telescope isn’t even designed to be in focus. Instead, the light from the target star will be spread out across the CCD detector surface for higher precision photometry.

CHEOPS is one of ESA’s S-class missions. That’s S for Small, with a budget capped at €50m. The onboard telescope is modest in size at only 30 cm (12-inch) aperture, but the data will give planet radii to better than 10% accuracy. Combined with Doppler mass measurements this gives the density of the planets to better precision than previously known. This in turn constrains planetary composition and informs models of planetary system formation. There’s a real conundrum about how certain giant planets got to be so close to their host stars – whether they formed in place by accretion or migrated inward after formation. We’ll also get to know more about Earth-like planets and about the class called “super-Earths” – planets between the size of Earth and Neptune, which are very common among the Kepler discoveries but not represented at all in our own solar system.
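Why radius precision matters so much for density: density goes as mass over radius cubed, so a fractional radius error counts three times over. A rough error-propagation sketch (the error figures are illustrative, not actual mission specs):

```python
import math

# rho = 3M / (4 pi R^3), so to first order the fractional density error is
# sqrt((dM/M)^2 + (3 dR/R)^2) -- the radius term enters with a factor of 3.

def density_fractional_error(mass_frac_err: float, radius_frac_err: float) -> float:
    return math.sqrt(mass_frac_err ** 2 + (3 * radius_frac_err) ** 2)

# A 5% mass with a 10% radius: density only good to ~30%
print(round(density_fractional_error(0.05, 0.10), 3))   # 0.304

# The same mass with a 3% radius: density to ~10%
print(round(density_fractional_error(0.05, 0.03), 3))   # 0.103
```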

With Kepler data still being analysed, TESS in progress, and CHEOPS to come, the exoplanet space is going to be pretty exciting over the coming years. And that’s not even counting the ground-based surveys that will be possible with the next generation of giant telescopes under construction.


Speak of the devil. No sooner had I written that post about CHEOPS than I came across this paper from just two weeks ago: Kite, Fegley Jr., Schaefer & Ford (2019), “Superabundance of Exoplanet Sub-Neptunes Explained by Fugacity Crisis”, arXiv:1912.02701.

So I mentioned that there are a lot of exoplanets – nearly 40% of those discovered by Kepler – in the super-Earth category, i.e. less than approximately three times the radius of Earth. Then, as we approach the radius of Neptune, the number of planets falls off a cliff. (The diagram is from a short but technical introduction to Kepler’s discoveries in this excellent PNAS article by Natalie Batalha, Kepler Mission Scientist. For more, also look up Dr. Batalha’s excellent talk [or maybe two?] at the Silicon Valley Astronomy Lectures from Foothill College.)

We have to be careful because there are many observational biases built into exoplanet surveys. They tend to find what they are best at finding. Nevertheless, the Neptune radius cliff appears to be real. The latest paper has a pretty wild idea. It’s because the planet’s atmosphere dissolves in its mantle.

The hypothesis goes like this: planets grow by accretion of leftover material in the early solar system. Above a certain size they are able to gravitationally hold down an atmosphere. The atmosphere gets more and more dense with increasing planet size. The rocky core of the planet is also hotter, and probably has oceans of liquid magma on its surface.

At a certain density the hydrogen gas in the atmosphere starts to become degenerate. We’ve discussed degeneracy before in the context of stars, but suffice it to say that the hydrogen stops behaving like an ideal gas. This is where “fugacity” kicks in. The hydrogen suddenly prefers to dissolve in the mantle instead of continuing to pile up in the atmosphere. According to the authors, “this sequestration acts as a strong brake on further growth”. Material can continue to accrete but the planet will not get any bigger. That’s what gives rise to the clustering effect in the diagram above. Only a smaller number of planets accrete enough material to continue growing.

Of course, this could be complete hogwash as many new ideas in science turn out to be. But it’s an interesting idea.


I thought we might pose a little seasonal maths problem:

It starts snowing in the morning and continues steadily throughout the day. A snow plough that removes snow at a constant rate starts ploughing at noon. It ploughs 2 miles in the first hour, and 1 mile in the second. What time did it start snowing?

It might sound initially like there isn’t enough information to answer the question, but there’s quite a neat solution. Answers on a postcard by tomorrow. Then we’ll talk about how this problem relates to the way the colours of stars are measured!


Ok, let’s do this in three pieces. First to conceptualise the snow plough problem. Then all the people jostling to have a stab at the maths can still do so ( :icon_wink: ). Then the star colours. I hope it all hangs together.

First bit first. The snow plough removes snow at a constant rate, i.e. a constant volume in unit time. Let’s pick a small time interval in which we can ignore, for the sake of approximation, that the snow is continually falling and getting deeper. So now we can picture the removed snow as a rectangular block with a length, a breadth, and a height. The breadth is easy – that’s just the width of the snow plough’s shovel, which is constant. The height is easy too – that’s the depth of the snow. And the length is how far the snow plough travels.

Now let’s compare this with some other time interval in which the snow depth has increased. In order to shovel a constant volume per unit time, the snow plough must travel less distance in that time, i.e. its speed decreases. But it’s snowing at a constant rate so the snow depth is proportional to the time. And we’ve just figured out that the plough speed is inversely proportional to the snow depth, and thus to the time.

Let’s just assume, for no good reason, that it’s snowing at a rate of “one”. One what? It doesn’t matter. It just means we can make the plough speed equal to the inverse time:


So the blue curve in this graph represents the snow plough velocity. And as with all velocity graphs, the distance travelled is given by the area under the curve in, for instance, the two equal time intervals shown which we’ll take to be hours. One neat thing about this particular velocity graph, v(t) = 1/t, is that its slope is different at every single point. It gets flatter and flatter … but never completely flat. And that means the distance travelled is smaller in each successive hourly interval. And there is a unique place on the graph where the distance travelled in one hour is double that travelled in the next hour.

Now we come to the key insight in the snow plough problem. If you change the rate of snowfall so that the 1/t becomes 1/Ct (where C is some constant rate of snowfall), you just scale everything up or down. In the graph above only the number scale on the vertical axis would change. The ratios of distance travelled in consecutive hour intervals wouldn’t. So the answer doesn’t depend on how hard it’s snowing! And that’s good because you weren’t given that.

Now all you have to do is find which is the unique hour interval to satisfy the problem, and work back to when it started snowing!

… N.B. I never said this was easy :smiley:


Let’s do the maths. This is less important than the concepts in the previous post.

Snow falls at a constant rate c, starting t₀ hours before midday. Assume the snow plough’s speed v at time t (hours after midday) is inversely proportional to the snow depth c(t + t₀). Therefore:

v(t) = k / [c(t + t₀)]

The plough moves twice as far in the first hour after midday as it does in the second. See graph in previous post. We integrate to get the area under the curve:

∫₀¹ v dt = 2 ∫₁² v dt
(k/c) ln((1 + t₀)/t₀) = 2 (k/c) ln((2 + t₀)/(1 + t₀))

(This is where we notice our constants have been cancelled by division and the answer is going to be independent of snowfall rate.)

Exponentiating gives (1 + t₀)³ = t₀(2 + t₀)², which simplifies to the quadratic t₀² + t₀ − 1 = 0. The quadratic formula gives a positive root of:

t₀ = (√5 − 1)/2 ≈ 0.61803

(Interestingly, this is the inverse golden ratio.)

It started snowing at midday − 0.61803 hr, or 11:22:55 am.
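The answer can also be checked numerically. A sketch using bisection, with the plough speed 1/(t + t₀) from the constant-volume argument (t in hours after noon, snow starting t₀ hours earlier):

```python
import math

# Distance ploughed between hours a and b is the integral of 1/(t + t0),
# i.e. ln((b + t0) / (a + t0)).

def distance(a: float, b: float, t0: float) -> float:
    return math.log((b + t0) / (a + t0))

# We want the t0 for which the first hour's distance is twice the second's.
def mismatch(t0: float) -> float:
    return distance(0, 1, t0) - 2 * distance(1, 2, t0)

# Simple bisection: mismatch is positive at 0.1 and negative at 1.0
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if mismatch(lo) * mismatch(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(round(lo, 5))   # 0.61803 -- the inverse golden ratio
```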


The snow plough problem was solvable because the answer did not depend on the snowfall rate. And that’s because the velocity of the snow plough was determined by a homogeneous function:

v(t) = k/t, so that v(at) = a⁻¹ v(t)

These types of functions exhibit something called scale invariance. The ratio of distance travelled in successive time intervals only depended on the shape of the curve, not its scale.

Let’s turn our attention to stars. They generate nuclear fusion energy which, after diffusing through the star’s envelope, is radiated from its surface. The total energy per unit time is called the radiant power or radiant flux, more commonly called luminosity in astronomy. It’s measured in watts, which is the same as joules (of energy) per second.

When we talk about the radiant flux of a star we mean the bolometric flux, which is the power emitted at all wavelengths of electromagnetic energy. The surface of a star is a big mass of jostling particles in thermal equilibrium and so it emits radiation at different wavelengths according to Planck’s Law, which we have discussed numerous times. The Planck curve gives the amount of energy emitted at each wavelength for a given temperature. The curve always has the same shape, but shifts to higher power and shorter wavelength for higher temperatures:

You’ll notice the vertical axis in the graph above measures spectral radiance, i.e. power (in Watts) per unit solid angle (in steradians) per unit area of detector surface (in square metres), per unit wavelength (in nanometres). Radiance is a bit different from radiant flux and is a slightly tricky concept that we won’t go into. Suffice it to say that radiance is a conserved quantity, and so is spectral radiance. If it wasn’t, stars would change colour as you got closer to them.

What we can measure through a telescope is flux density, the amount of power collected per unit area of our detector. We can split the light up into its constituent wavelengths using a spectroscope. But that takes a long time and can generally only be done one star at a time. Sometimes we just want an overall measurement which will characterise the colour of the star. One reason for our interest in this is that the colour tells us the temperature.

A clever way to measure the colour of a star is by using a colour index: the ratio of the heights of the Planck curve at two different wavelengths, measured using two different broadband colour filters. We have to digress a moment here to mention that in astronomy we measure apparent brightnesses of stars on a logarithmic scale, called the magnitude scale. Logarithms turn division into subtraction: the logarithm of a ratio of two quantities is the difference of their logarithms. So the ratio of two apparent stellar brightnesses corresponds to the difference of their magnitudes.
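A sketch of the magnitude arithmetic (the −2.5 log₁₀ scale factor is the standard definition; the flux ratio is made up for illustration):

```python
import math

# Magnitudes are logarithmic: m = -2.5 log10(flux) + constant, so a ratio of
# two fluxes becomes a difference of two magnitudes.

def magnitude_difference(flux_ratio: float) -> float:
    """Magnitude difference m1 - m2 for a flux ratio F1 / F2."""
    return -2.5 * math.log10(flux_ratio)

# A star emitting half as much flux in B as in V has B - V of about +0.75,
# i.e. a cooler, redder star.
print(round(magnitude_difference(0.5), 3))   # 0.753
```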

We can get a better idea of the invariant shape of the Planck curve by looking at it on a logarithmic scale:

As well as the Planck curves for different temperatures, the diagram shows the location of the visible spectrum within the broader electromagnetic spectrum. The peak of the Planck curve for many stars is in, or near, the visible spectrum. This is not an accident – our vision evolved to see the brightest range of wavelengths emitted by our own star, the Sun. A common colour index that is used is called (B - V). ‘B’ stands for blue and ‘V’ for visible (which to an astronomer means green because it’s at the centre of the visible spectrum). You can see from the logarithmic Planck curves that their slopes change continuously. Measuring the slope between the B and V magnitudes gives a unique value for the colour of a star.

You can also see that for objects much cooler than 3000 K the (B - V) colour index is measuring a very steep part of the Planck curve, which makes it difficult to do accurately as very little energy is radiated at the short wavelengths. Here we might use a different colour index such as (R - I) (red and infrared). For the hottest stars we might use (U - B) (ultraviolet and blue).

The slope of the Planck curve is upward (positive) on the left, and downward (negative) on the right. However, because the astronomical magnitude scale becomes more negative with increased brightness, the signs are reversed. Colour indexes are negative for hotter stars, and positive for cooler ones. Here are the (B - V) indexes for two stars in Orion, a blue giant and a red giant, with the Sun in between for comparison. This time we’ll put the temperature on the vertical axis.

If you go out one of these winter evenings and spot Orion in a dark sky, you’ll see these colour differences with the naked eye:

I think it’s fascinating that you can tell which stars in the sky are hotter by their colour. That’s something nobody knew until a little over a century ago. Don’t confuse hotter with bigger, though. Bellatrix is about six times the size of our Sun, whereas the much cooler Betelgeuse is a thousand times the Sun’s diameter and is one of the biggest stars we can see with the naked eye. On the other hand, don’t assume stars of the same temperature are the same size. Our nearest star, Proxima Centauri, is the same colour as Betelgeuse but is an M dwarf only one sixth the size of our Sun.

We can sort out these oddities only through knowledge of stellar evolution. On the main sequence – the stage of a star’s life where it burns hydrogen to helium – hotter does indeed mean bigger. Old stars that have depleted their hydrogen move off the main sequence and swell up to become red giants. They’re much hotter than main sequence stars on the inside, but cooler on the outside.


Happy Perihelion Day! Feel the warmth … because over the next six months you’ll be getting five million kilometres further away from the Sun. That’s more than a thousand kilometres an hour on average.

Could we be about to get a whole lot closer to a different star? Or, at least, the exploding guts of another star. Betelgeuse, mentioned in the last post, has been dimming recently. That on its own is unsurprising as it’s a variable star. But it has dimmed below any previously known value. Although it may seem a strange hobby, there are amateurs who devote their time to measuring the brightness of stars. It’s one of the areas where an amateur can contribute to real science that, paradoxically, the professional scientists have difficulty doing due to large telescope time constraints.

And so, the American Association of Variable Star Observers and others take their scopes and their photometers and collect data like this:

Betelgeuse hasn’t been fainter than magnitude 1.3 since 1927, but the latest measurement is a considerably fainter 1.5. It’s nearly five times dimmer than the brightness peak you can see in the graph around 2016. What it heralds, nobody knows. But we know Betelgeuse is living on borrowed time, having burned up its core hydrogen. It will inevitably suffer a core collapse some day, but whether it will happen tomorrow or 10,000 years from now we cannot say. We’re also way overdue a supernova in the Milky Way. The statistics tell us we get one on average every 100 years but get this – the last one was in 1604!
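Converting that magnitude change to a brightness ratio (taking the 2016 peak as roughly magnitude 0 – a round number assumed for illustration):

```python
# Each magnitude step is a factor of 10^0.4 (~2.512) in brightness, so the
# ratio between two magnitudes is 10^(0.4 * difference).

def brightness_ratio(mag_faint: float, mag_bright: float) -> float:
    return 10 ** (0.4 * (mag_faint - mag_bright))

# Magnitude 1.5 now versus a peak of around magnitude 0 in 2016
print(round(brightness_ratio(1.5, 0.0), 2))   # 3.98 -- roughly four times dimmer
```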

That means there hasn’t been a supernova in our galaxy during the telescopic era. A ringside seat for one as close as Betelgeuse would be a gift. That said, it’s good to be sitting in the back row – a volume of space fifty light years in all directions from the star would be sterilised of any life that might have existed. We’re a safe 700 light years away. But sparks will still fly. A Betelgeuse supernova would be brighter than the full Moon, would be visible in broad daylight, and would last for weeks. At the end of it, poor Orion the Hunter would be an amputee, separated from his right shoulder forever.

Chances are Betelgeuse will recover from its current malaise and totter on a while longer. But it makes a good story, and there’s still a chance of a spectacle in our lifetimes.


One of the greatest achievements of the scientific era is figuring out the scale of the universe we live in. It is related to a very important story in the world of astronomy this week. Let’s start at the beginning, though. And I mean, the very beginning.

The initial reticence about Copernicus’ heliocentric model was not, as sometimes painted, dogmatic attachment to geocentrism, but its implications for the size of the celestial sphere. Once you made the Sun the centre of the universe, the supposed orb to which the stars were fixed should appear to wobble from the point of view of Earth in its annual orbit. No such wobble was seen, therefore Copernicus must be wrong and his 1543 book, De Revolutionibus, made few waves with either science or the church.

The heliocentric model had little else to recommend it. Although it simplified some of the mathematics of the Ptolemaic geocentric model, it didn’t simplify it by much. That’s because Copernicus was wedded to the idea of circular planetary orbits, so he still needed complicated epicycles or “wheels within wheels” on which to mount the celestial bodies. Things changed quickly with three new developments. First, Galileo turned a telescope on the sky for the first time in 1609. In 1610 he published his startling discovery of “stars” (actually moons) orbiting Jupiter. This discovery on its own displaced Earth as the centre of all motion. He subsequently observed phases of Venus associated with its motion around the Sun. Then, in the same decade, Kepler published his laws of planetary motion (ironically rejected by Galileo). It took some time for Kepler’s Laws to be accepted, starting with a successful prediction of a transit of Mercury across the Sun, observed by Gassendi in 1631. It was not until Newton showed in 1687 that Kepler’s ellipses were the inevitable result of a central force law of gravity that the whole edifice of the new solar system astronomy became mainstream.

Meanwhile the scale of the solar system was being probed for the first time. Kepler’s Laws gave a precise relationship between the sizes of the planetary orbits and their periods. The periods could be measured, but none of the distances were known. Just one measurement could be used to extrapolate all the distances. In 1672, Cassini and Richer made simultaneous measurements of Mars from Paris and French Guiana. The angular shift of Mars against the background stars – its parallax – was used to measure the distance between the orbits of Mars and Earth. Hipparchus had used this technique to measure the distance to the Moon to amazing accuracy almost 2,000 years earlier. By extrapolation from Cassini’s result, the scale of everything else in the solar system was known. The Earth–Sun distance was determined to within 6% of the modern value. This wasn’t improved upon until observations of the transits of Venus in 1761 and 1769.

Now that the heliocentric model was firmly established, the problem of Copernicus came back to haunt astronomers. Saturn was determined to orbit an incredible 1.5 billion kilometres from the Sun. And yet the lack of stellar parallax implied improbably vaster distances again. Nevertheless, the physical evidence for heliocentrism was beyond dispute at this stage, so the problem of detecting stellar parallax was assumed to be one of accuracy. Knowing the scale of the solar system provided a new baseline for parallax measurements. By taking measurements six months apart, the 300m km diameter of the Earth’s orbit could be used. It still took until 1838 before Bessel detected the first parallax shift of a star. Nobody had suspected that the nearest star would turn out to be 40 trillion kilometres distant!
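The parallax arithmetic itself is simple trigonometry; a sketch with the Earth-orbit radius as baseline (the Proxima Centauri parallax of ~0.77 arcseconds is quoted from memory, for illustration):

```python
import math

# Annual parallax: a star at distance d shifts by half-angle p when the Earth
# moves one orbital radius (1 au) sideways, so d = (1 au) / tan(p).

AU_KM = 1.496e8                    # astronomical unit in km
ARCSEC = math.pi / (180 * 3600)    # one arcsecond in radians

def parallax_distance_km(parallax_arcsec: float) -> float:
    return AU_KM / math.tan(parallax_arcsec * ARCSEC)

# Proxima Centauri, parallax ~0.77 arcsec: about 40 trillion km (~4.2 ly)
print(parallax_distance_km(0.77) / 1e12)   # roughly 40
```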

It was more convenient to measure distances to stars in light years – four light years to the nearest star. Parallax measurements were possible out to a few hundred light years. (Recently, space-based astrometry has extended this by two orders of magnitude. That’s still only the distance to the centre of our own galaxy.) If we wanted to measure even greater distances, we would need new techniques of a wholly different nature. While studying variable stars in 1908, Leavitt discovered a relationship between the brightness of certain types of variables – the Cepheids – and their period of pulsation. The distances to nearby variables were known from parallax measurements, and there is a well-known relation – the inverse-square law – between the apparent brightness of a star as seen from Earth and its intrinsic brightness. Leavitt could therefore establish the intrinsic brightness of Cepheids of a given pulsation rate. Now she could use this to determine the distances to Cepheids that were too far away to measure by parallax.
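The inverse-square law is usually written in magnitudes as the distance modulus m − M = 5 log₁₀(d / 10 pc). A sketch (the Cepheid numbers here are made up for illustration):

```python
# Invert the distance modulus: d (parsecs) = 10^((m - M)/5 + 1).
# Given an intrinsic (absolute) magnitude M from the period-luminosity
# relation and a measured apparent magnitude m, out pops the distance.

def distance_pc(apparent_mag: float, absolute_mag: float) -> float:
    return 10 ** ((apparent_mag - absolute_mag) / 5 + 1)

# A Cepheid of absolute magnitude -4 observed at apparent magnitude 16
print(distance_pc(16.0, -4.0))   # 100000.0 parsecs
```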

Cepheids are visible in nearby galaxies and Leavitt’s work was instrumental in the discovery that the galaxies are separate systems of stars separated by vast distances. This so-called Great Debate was settled by Hubble when, in 1924, he used Cepheids to measure the distance to our neighbouring Andromeda galaxy. After a series of galaxy measurements he made his most momentous discovery in 1929. It had been known since Slipher’s work in the early 1900s that the light from some galaxies was shifted to the red compared to others. Hubble showed that the amount of redshift was proportional to the distance determined by the Cepheid measurements. Moreover, the redshift was caused by rapid movement away from our own galaxy, so that the galaxies were getting further apart. This marked the amazing discovery of the expansion of the universe.

Now the measurement of distances took on a different significance. It was not just determining the scale of the universe but also its rate of expansion. Cepheids are very luminous variable stars and so can be seen at very great distances. But cosmological distances are so vast that they are good only for relatively nearby galaxies. Just as planetary parallaxes had given us the baseline for stellar parallax, and stellar parallax led to Cepheid measurements, we needed another rung on the “Cosmic Distance Ladder” to lift us to yet bigger scales. Unfortunately it is not easy to find tight relationships between distance and other observables. A number of different ones have been found over the years, but most are fairly loose or have only statistical value. What we need is a reliable and bright “standard candle” that lets us determine the very largest distances in the cosmos.

This is where supernovae came to the rescue. These are stellar explosions so bright they can be seen from the other side of the universe. Astronomers classified supernova explosions according to features of their spectra, but over the course of the 20th century the physical mechanisms behind them came to be understood. The Type II explosions were no use as standard candles. They are core collapse events, where a star that has exhausted its fuel can no longer sustain its own weight. Its core is crushed by gravity to a neutron star or black hole depending on the star’s original mass. And therein lies the problem – stars come in different sizes and so too do Type II supernova explosions. There is very little standard about them.

The Type Ia supernovae are a different kettle of fish. These occur when a white dwarf star accretes matter from a binary companion. Many stars in the universe are binaries, so they evolve alongside another star. Late in life, one star may become a dense white dwarf while the companion goes through the red giant stage. The surface gravity of the huge red giant cannot hold down its material, which begins to be siphoned off by the white dwarf. The latter can increase its mass above a crucial limit (the Chandrasekhar mass, about 1.4 solar masses) at which it can no longer withstand the gravitational force. Unlike a core collapse supernova, though, the white dwarf undergoes a detonation which destroys it completely. Because every white dwarf approaches the same critical mass slowly, all Type Ia explosions have a standard size and brightness. This is the sort of standard candle we want! Actually it’s not quite that simple – there is variation among Type Ia events, but the length of the afterglow, which is caused by radioactive decay of elements created in the explosion, can be used to determine the peak intrinsic brightness. The picture below shows a series of light curves from the Calán/Tololo Supernova Survey. After scaling for duration all the supernovae conform to a standard light curve with little scatter.


Armed with this knowledge, astronomers set out to get the best set of distance measurements yet and, by combining them with redshift measurements, to calibrate the expansion rate of the universe. In 1998 two separate teams – the Supernova Cosmology Project led by Saul Perlmutter and the High-Z Supernova Search Team formed by Brian Schmidt and led by Adam Riess – used Type Ia supernovae for this task. The result is history, and it shocked the world. Measurements made by Hubble and subsequent observers showed a scatter in the redshift–distance relationship through which a straight line could be drawn to give the expansion rate. The 1998 measurements were tight enough to show that the relation was not a straight line after all. The expansion rate fell with distance, meaning the universe is expanding faster now than it was in the past. The accelerating expansion is what has come to be explained by dark energy.

It is hard to explain in brief how thoroughly dark energy has been woven into theories of cosmic evolution. All the way back in 1916, Einstein’s General Theory of Relativity had contained an optional parameter which Einstein himself believed might be the key to stopping the universe from collapsing under its own weight. This cosmological constant, denoted by Λ (the Greek uppercase letter lambda), turned out to be unnecessary after Hubble’s discovery of the galactic redshift. The universe wasn’t collapsing because it was expanding. Einstein called it “my biggest blunder”. The cosmological constant hadn’t gone away, though, and was ready and waiting to be rehabilitated when the 1998 expansion measurements came in. Meanwhile, there were other problems with the cosmos. Its apparent uniformity on large scales could not be explained by cosmic expansion. If you look in opposite directions in the sky you are looking at patches that have not been in contact with each other since shortly after the Big Bang. Quantum fluctuations in that era should have led to much larger variations in density than are actually seen. In the 1980s this became the subject of a new theory of cosmic inflation, which posited that the universe underwent an extremely rapid initial expansion that smoothed out the fluctuations. The tentative explanation for the inflationary expansion was some sort of “vacuum energy” inherent in space itself. Dark energy might be some sort of leftover remnant of that initial kick, after decaying to a lower energy level.

So dark energy had a triple role: it explained the accelerating expansion, it was a candidate for the leftover vacuum energy from cosmic inflation, and it gave a nice place to hang the cosmological constant term of Einstein’s equations. It seemed almost too neat to be true, and it probably was. Combined with another hypothesis, dark matter, which was needed to explain various unseen gravitational components of galaxies, the standard model of how the universe evolved came to be called the ΛCDM concordance model (lambda plus Cold Dark Matter). A generation of science students (including yours truly) has been told that any misgivings about not being able to see the 95% of the universe which is dark are misplaced.

Roll forward to 2011 and Perlmutter, Schmidt and Riess are receiving the Nobel Prize in Physics, “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae”. But even then there were murmurings about possible problems with the supernova measurements. One team came up with evidence that not all Type Ia events had the same intrinsic brightness, even after adjusting for light curve duration. Now this would be a problem, though not necessarily a fatal one. If there is some scatter in the luminosity–distance relationship it is OK as long as it is small enough and random enough. Statistically, the conclusions may still hold.

But what if there is a systematic bias in the measurements? For some time there has been concern that the Type Ia light curve may differ between stellar populations of different ages. The more redshifted events are further away, so we are seeing them as they were at an earlier epoch, which means the stars involved are younger. A variation in the light curve with redshift would be a potential disaster for the measurements, as it would undermine the whole basis on which the accelerating expansion was computed.

That’s where this week’s news comes in, showing that the key assumption is in error. A research team has found that older stellar populations produce brighter Type Ia supernovae, at a 99.5% confidence level. The bombshell is that “when the luminosity evolution of SN is properly taken into account, the team found that the evidence for the existence of dark energy simply goes away”.

Commenting on the result, Prof. Young-Wook Lee (Yonsei Univ., Seoul), who led the project said, “Quoting Carl Sagan, extraordinary claims require extraordinary evidence, but I am not sure we have such extraordinary evidence for dark energy. Our result illustrates that dark energy from SN cosmology, which led to the 2011 Nobel Prize in Physics, might be an artifact of a fragile and false assumption.”

If this is accepted it’s going to be quite a shock to the system for the physics community. Acceptance is not guaranteed, of course, but the latest result is based on nine years of work so it can’t be lightly discarded. It’ll be published in this month’s Astrophysical Journal, but in the meantime here’s an arxiv preprint along with the abstract:

Early-type Host Galaxies of Type Ia Supernovae. II. Evidence for Luminosity Evolution in Supernova Cosmology

Yijung Kang, Young-Wook Lee, Young-Lo Kim, Chul Chung, Chang Hee Ree

The most direct and strongest evidence for the presence of dark energy is provided by the measurement of galaxy distances using type Ia supernovae (SNe Ia). This result is based on the assumption that the corrected brightness of SN Ia through the empirical standardization would not evolve with look-back time. Recent studies have shown, however, that the standardized brightness of SN Ia is correlated with host morphology, host mass, and local star formation rate, suggesting a possible correlation with stellar population property. In order to understand the origin of these correlations, we have continued our spectroscopic observations to cover most of the reported nearby early-type host galaxies. From high-quality (signal-to-noise ratio ~175) spectra, we obtained the most direct and reliable estimates of population age and metallicity for these host galaxies. We find a significant correlation between SN luminosity (after the standardization) and stellar population age at a 99.5% confidence level. As such, this is the most direct and stringent test ever made for the luminosity evolution of SN Ia. Based on this result, we further show that the previously reported correlations with host morphology, host mass, and local star formation rate are most likely originated from the difference in population age. This indicates that the light-curve fitters used by the SNe Ia community are not quite capable of correcting for the population age effect, which would inevitably cause a serious systematic bias with look-back time. Notably, taken at face values, a significant fraction of the Hubble residual used in the discovery of the dark energy appears to be affected by the luminosity evolution. We argue, therefore, that this systematic bias must be considered in detail in SN cosmology before proceeding to the details of the dark energy.


EDIT: My girl Sabine weighs in on this and other dark energy demolitions (plus some rebuttals):


Lots of us watch the clock at this time of year on the lookout for a change in the seasons. Sunrises are getting earlier, sunsets are getting later. Today, January 12th, is the first sunset after 4.30 pm in Dublin. Although it’s a little premature yet, we’re getting ready to utter that timeless Irish phrase: “grand oul’ stretch in the evenin’s!”

Yet anyone looking closely at the moment will see that the grand oul’ stretch in the evenings is not matched by a stretch in the mornings. Whereas sunset has gone out by twenty-five minutes since its earliest occurrence, sunrise has only come in by a measly five. Why the asymmetry? It has to do with two things I mentioned coincidentally in the last two posts: Kepler’s Laws and perihelion.

As we know, the sky appears to rotate around us in the course of a day. The Sun rises in the east and sets in the west, crossing the meridian (the line through the zenith from north to south) at noon each day. This is the definition of local noon. We can measure time by the Sun. As the saying goes, “you can set your clock by it”. The interval between two successive meridional crossings is one solar day.

An astronomer measures the day by the rotation of the stars. The stars also rise in the east and set in the west, reaching their highest point as they cross the meridian. When we measure a sidereal day (i.e. by the stars, from Latin sidus) we find it is shorter than the solar day by 3 minutes 56 seconds. We can see why in this picture:

In the time that the Earth has done a full rotation to point at the same stars again it has also moved slightly in its orbit round the Sun. So the Sun has moved slightly eastward with respect to the stars and the Earth (which rotates west to east) has to turn a little bit more to face the Sun again. That makes the solar day longer than the sidereal day.
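The 3 minutes 56 seconds can be checked with a couple of lines of arithmetic – in each solar day the Earth must turn through roughly 361 degrees rather than 360:

```python
# Difference between the solar and sidereal day, from the geometry above.
# In one solar day the Earth moves ~360/365.25 degrees around its orbit,
# so it must rotate that little bit extra to face the Sun again.

SOLAR_DAY_S = 24 * 3600            # 86400 seconds by definition
extra_deg = 360 / 365.25           # extra rotation needed, ~0.9856 degrees

# The Earth turns through (360 + extra_deg) degrees per solar day,
# so the time spent turning the extra bit is:
extra_s = SOLAR_DAY_S * extra_deg / (360 + extra_deg)

m, s = divmod(extra_s, 60)
print(f"solar day exceeds sidereal day by {int(m)} min {s:.0f} s")
```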

And that would be the end of the story if the Earth’s orbit around the Sun were perfectly circular. We would travel 360/365¼ – approximately 0.99 – degrees around our orbit each solar day. But this is where Kepler’s laws of planetary motion come in. The first law tells us that the planets move in ellipses, not circles, around the Sun. A circle is a special case of an ellipse, but none of the planetary orbits is exactly circular. The second law tells us how the planet moves around the ellipse. It says, somewhat cryptically, that “the radius to a planet sweeps out equal areas during equal time intervals”. Here’s a visualisation of what that means (ignore the arrows):

We divide the orbit up into equal time increments, so that the orbital radius sweeps through the blue “slice of cake” in each increment. Kepler’s 2nd law tells us that each slice of cake has the same area. Since the slice of cake is shorter when the planet is nearer the Sun it must also be fatter. That means the planet turns through a bigger angle per time increment when it is closer.

The picture above is a very exaggerated ellipse compared to the Earth’s orbit. Earth is only about 3% further from the Sun at aphelion than at perihelion. Nevertheless the same effect applies, and Earth was at perihelion just a week ago on January 5th. That means it’s travelling just over 1 degree around its orbit daily at the moment, a little faster than the 0.99 average. That in turn means the Earth has to rotate a few seconds longer than average to point back toward the Sun each day – and the tilt of the Earth’s axis, for separate reasons, adds an even bigger contribution in the same direction near the solstices, stretching the solar day by about half a minute in total.
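Here’s a quick sanity check of the perihelion numbers in Python (taking Earth’s eccentricity as 0.0167). Note that the eccentricity effect on its own amounts to about eight seconds per day; the tilt of the Earth’s axis contributes the rest of the roughly half-minute drift at this time of year:

```python
import math

# Kepler's 2nd law: r**2 * dtheta/dt is constant, so the angular rate at
# perihelion (r = a*(1-e)) exceeds the mean rate by a factor
# sqrt(1 - e**2) / (1 - e)**2.

e = 0.0167                          # eccentricity of Earth's orbit
mean_rate = 360 / 365.25            # degrees per day, ~0.9856

peri_rate = mean_rate * math.sqrt(1 - e**2) / (1 - e)**2
print(f"angular rate at perihelion: {peri_rate:.3f} deg/day")

# Extra rotation time needed vs the average solar day.  The Earth turns
# ~361 degrees per 86400 s, i.e. about 0.00418 deg/s.
rot_rate = (360 + mean_rate) / 86400
extra_seconds = (peri_rate - mean_rate) / rot_rate
print(f"eccentricity adds ~{extra_seconds:.0f} s to the solar day at perihelion")
```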

Now here’s the crunch. It would be a considerable pain if our clocks had to deal with a variable day length, sometimes half a minute longer, sometimes shorter, than average. Instead we decide that each day will be exactly 24 hours long, to match the average or mean. That’s why we call it Greenwich Mean Time. But the upshot is that the time by the clock doesn’t match the time by the Sun. At this time of year the Sun is reaching its highest point – local solar noon – half a minute later each day. The tilt of the Earth, which controls our seasons, is increasing the time between sunrise and sunset each day. But that day is sliding later and later with respect to clock time. So we are seeing almost all of the increase in the evenings, and almost none in the mornings.


I’m a slow learner when it comes to physics. Concepts don’t sink in unless I can form a mental picture, and the picture sometimes comes into focus maddeningly slowly over years. It’s about plugging new things learnt into the tenuous framework of things already known. I came across something the other day that excited me about how we can visualise light transmitted through empty space. You never truly understand something until you can explain it, so I’ll try to regurgitate it here.

We’re all familiar with the idea of sound as a wave. We can even picture it in terms of molecules of air buffeting each other and transmitting energy:

The vibrating speaker on the left produces the sound energy which accelerates air molecules which collide elastically with other air molecules, and so on. Although a sound wave propagates longitudinally, no individual molecule goes anywhere in the long run. You can see this if you look closely at any one dot – three of them are marked in red to make it easier to see.

Let’s think about what might affect the speed of sound. It seems clear that it would be affected by how quickly the gas molecules bounce off each other. That in turn depends on how closely packed they are, which is the bulk property of a gas that we call pressure. On the other hand, the sound will travel more slowly the harder it is to move each individual molecule, and that depends on its mass. Back when this question was first pondered nobody knew that molecules even existed, so they used the more tangible mass of a volume of air, in other words its density. And finally, since energy is being transmitted and there is a square-root relationship between kinetic energy and speed in Newtonian mechanics, we arrive at Isaac Newton’s first attempt to characterise the speed of sound c through a gas of pressure p and density ρ (rho):

c = √(p/ρ)

For reasons that we need not get into, Newton wasn’t quite correct, as he failed to take into account some features of the way the gas is compressed during sound transmission. The mathematician Laplace added the adiabatic index (γ, which for air is just a simple constant equal to 1.4) to give the formulation known as the Newton–Laplace equation:

c = √(γp/ρ)
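As a sanity check on the Newton–Laplace formula, here’s a quick Python sketch using round-number sea-level values for air (assumed textbook figures, not measurements from this post):

```python
import math

# Newton-Laplace sanity check: c = sqrt(gamma * p / rho) for air at sea level.
gamma = 1.4        # adiabatic index of air
p = 101_325.0      # pressure, Pa (N/m^2)
rho = 1.225        # density, kg/m^3

newton = math.sqrt(p / rho)           # Newton's original (isothermal) estimate
laplace = math.sqrt(gamma * p / rho)  # Laplace's adiabatic correction

print(f"Newton:         {newton:.0f} m/s")
print(f"Newton-Laplace: {laplace:.0f} m/s")
```

The corrected figure comes out around 340 m/s, matching the observed speed of sound; Newton’s original underestimates it by about 15%.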

So how about light? Is there any way we can understand its propagation in a similar way? Newton had given us this mechanical understanding of sound waves, and one of his contemporaries, Christiaan Huygens, had proposed that light was also a wave. Newton disagreed with him, but at the very beginning of the 19th century Thomas Young’s double slit experiment seemed to prove that it was so. In the middle of that century Maxwell’s equations revolutionised our understanding of electricity and magnetism, and the electromagnetic wave theory of light became firmly established, with the speed of light in a medium given by:

v = 1/√(με)

It looks a little bit like our speed of sound equation, but only a little. Those squiggly Greek letters are μ (mu), the relative permeability of the medium, and ε (epsilon), the relative permittivity of the medium. These are big names for reasonably straightforward concepts, but we’ll have to go off on a little bit of a tangent to discuss them. We’ll come back to the central question in short order.

If you have ever studied Maxwell’s equations (and I’m not assuming you have) you’ll have come across μ and ε a lot. Permeability refers to the ability of a substance to become magnetised. Permittivity refers to its ability to become electrically polarised. Nowadays we know that polarisation is the result of separation between internal electric charges, usually the movement of an atom’s negative electron cloud with respect to its positive nucleus. But again, the existence of atoms and particles with electric charges wasn’t known at the time, so the complicated term permittivity – the susceptibility to electric polarisation – was invented.

Let’s take an example. Permittivity appears for instance in Gauss’s Law (one of Maxwell’s equations). This cryptic looking equation has a straightforward meaning. It says the electric flux through a closed surface (which is related to voltage) will be equal to the amount of electric charge (Q) enclosed, divided by the permittivity:

∮ E · dA = Q/ε

It’s possibly best explained by a real-world example. Electric capacitors store electric charge by having two oppositely charged conducting plates separated by an insulator or dielectric. We choose dielectrics with high permittivity because, according to Gauss’s Law, that gives us a lower voltage per unit of charge on the plates. Our aim is to store as much charge as possible for the lowest voltage (which costs us less energy to produce). For what should be obvious reasons, the relative permittivity of a material is also called its dielectric constant.
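To make the capacitor example concrete, here’s a little Python sketch of the standard parallel-plate formula C = εᵣε₀A/d, with illustrative (made-up) plate dimensions and typical textbook dielectric constants:

```python
# Parallel-plate capacitor sketch: C = eps_r * eps0 * A / d.
# A higher-permittivity dielectric stores more charge per volt.
eps0 = 8.854e-12   # vacuum permittivity, F/m
A = 0.01           # plate area, m^2 (10 cm x 10 cm)
d = 1e-4           # plate separation, m

for eps_r, name in [(1.0, "vacuum"), (4.7, "glass"), (80.0, "water")]:
    C = eps_r * eps0 * A / d
    print(f"{name:7s}: {C * 1e9:.2f} nF")
```

Swapping the vacuum gap for water raises the capacitance eighty-fold for the same plates – exactly the “more charge for less voltage” effect described above.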

Now, the concept of relative permittivity begs the question: relative to what? It is relative to a quantity that we call ε₀, the permittivity of free space or permittivity of the vacuum. Likewise, relative permeability is relative to μ₀, the vacuum permeability.

This brings us back to the main story. None of this caused the 19th century physicists to bat an eyelid. It was already obvious to them that light, being a wave just like a sound wave, required a mechanical medium in order to propagate. The space through which light from the stars travelled to us must be filled with this medium, which the scientists named aether. μ₀ and ε₀ were just properties of this aether medium and the speed of light in it would be:

c = 1/√(μ₀ε₀)
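We can check the 19th-century punchline numerically – plugging the measured vacuum constants into that formula really does give the speed of light:

```python
import math

# Plug the measured vacuum permeability and permittivity into
# c = 1/sqrt(mu0 * eps0) and out pops the speed of light.
mu0 = 1.25663706e-6    # vacuum permeability, N/A^2 (H/m)
eps0 = 8.8541878e-12   # vacuum permittivity, F/m

c = 1 / math.sqrt(mu0 * eps0)
print(f"c = {c:.0f} m/s")   # ~299,792,458 m/s
```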

The bombshell came towards the end of the 19th century when the Michelson–Morley experiment disproved the existence of the aether. Further experiments at the start of the 20th century definitively ruled it out. How could this be? What does it even mean for a wave to propagate in the absence of a medium? The strange and ironic fact is that our very latest understanding of the vacuum in terms of quantum physics may answer that question in a somewhat old-fashioned way.

First let’s do an exercise in dimensional analysis. That means looking at the units in which something is measured in order to enhance our understanding of the physics. As I mentioned, permittivity arises in connection with various properties of the electric field in Maxwell’s equations. The vacuum permittivity was measured as ε₀ = 0.00000000000885 farad per metre. I’m going to be using scientific notation, and negative exponents will be used for inverses, but it should be pretty clear:

ε₀ = 8.85 × 10⁻¹² F m⁻¹

Now, a farad is a unit of capacitance and I already explained that our modern understanding of capacitance is that it involves internal charge separation among the particles of a medium. But this leaves us with a conundrum if the aether medium doesn’t exist. Nevertheless, let’s plough on with our dimensional analysis.

The farad unit of capacitance relates the charge (in coulombs) on our capacitor plates to the energy (in newton metres) needed to store it: capacitance works out as charge squared per unit of energy (give or take a factor of two that won’t matter here). So:

1 F = 1 C² N⁻¹ m⁻¹

A coulomb is a unit of electric charge, but it was never very easy to measure a quantity of charge directly. So instead the coulomb was defined to be the amount of charge transferred when an electric current of one ampere flowed for one second. We’re used to the fact that an ammeter can measure current, but how does it do it? In fact, we can’t measure current directly either, but we can detect a force between two parallel current carrying wires. That’s what produces the needle deflection in the ammeter. An amp was defined as the current that produces a force of a couple of ten millionths of a newton for each parallel metre of wire. The exact numerical factor doesn’t matter for dimensional analysis – the point is that an amp corresponds to a force per unit length. Thus:

A ~ N m⁻¹, and so C = A s ~ N s m⁻¹


So now we can write:

ε₀ ~ F m⁻¹ = C² N⁻¹ m⁻² = (N s m⁻¹)² N⁻¹ m⁻² = N s² m⁻⁴

And since a newton is a force measured in kilogram metres per second squared, we can finally simplify to:

ε₀ ~ (kg m s⁻²) s² m⁻⁴ = kg m⁻³

But wait now! Kilograms per metre cubed is a density! Of what? Apparently it’s the density of empty space!

Let’s look at magnetic permeability. The vacuum permeability μ₀ is measured in henrys (a unit of magnetic inductance) per metre, which is equivalent to newtons (force) per square ampere (current):

μ₀ ≈ 1.26 × 10⁻⁶ N A⁻²

Using the definition of amperes that we already looked at (A ~ N m⁻¹), this can be converted into:

μ₀ ~ N (N m⁻¹)⁻² = m² N⁻¹

This is the inverse of newtons per metre squared … force per unit area, or pressure! Pressure can also be written in units of N m m⁻³ – newton metres (energy) per cubic metre, or energy per unit volume. Consider that the pressure of a gas is produced by the internal energy of the molecules. Anyway, we now have that the vacuum permeability is an inverse pressure (or inverse energy density) of space, while the vacuum permittivity is a conventional mass density. And if we rewrite our expression for the speed of light in those terms, with p = 1/μ₀ and ρ = ε₀, we get:

c = 1/√(μ₀ε₀) = √((1/μ₀)/ε₀) = √(p/ρ)
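A quick numerical check of the analogy: reading 1/μ₀ as a pressure and ε₀ as a density, the Newton–Laplace-style square root gives back the speed of light:

```python
import math

# Treat 1/mu0 as a "pressure" and eps0 as a "density" of empty space;
# the Newton-Laplace form sqrt(p/rho) then reproduces the speed of light.
mu0 = 1.25663706e-6    # read as inverse pressure: m^2/N
eps0 = 8.8541878e-12   # read as mass density: kg/m^3

pressure = 1 / mu0     # ~7.96e5 "N/m^2"
density = eps0         # ~8.85e-12 "kg/m^3"

c = math.sqrt(pressure / density)
print(f"sqrt(p/rho) = {c:.0f} m/s")   # the speed of light again
```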

And this looks exactly like the Newton–Laplace equation for the speed of sound! The only thing is we are left scratching our heads over what it means to talk about the pressure and density of empty space. In the century since the Michelson–Morley experiment we’ve gotten used to talking about empty space as nothingness. Yet that would ignore a century of quantum physics, which tells us that the vacuum is actually a seething mass of virtual particles that pop in and out of existence continually. On average they do have a certain amount of energy and mass, and this is what we have just measured by converting the vacuum permeability and permittivity to appropriate units. We can think of them as the medium through which light propagates.

There is one more bonus revelation in store. We said that pressure is the same as energy density, so we can write:

c² = p/ρ = (E/V)/(m/V) = E/m, or in other words E = mc²

So we just plucked the most famous equation of Special Relativity out of our analysis too! Classical electromagnetism, quantum physics, and relativity, all hiding in the single phenomenon of light transmission.

(A real physicist would have a fit at the lack of rigour here, but I never claimed to be one of those :icon_biggrin:).


Grand oul’ stretch in the evenings! :icon_biggrin:

First post-5pm sunset for Dublin just happened. Won’t have an earlier one again till 28th October.

Spacex just launched batch four of the Starlink “internet in the sky” satellites. (We talked about batch two upthread around November/December). Each batch is 60 satellites and they are initially all clustered together but gradually spread out and move into a higher orbit. While they are clustered they make a very unusual sight in the evening or early morning sky. Like the ISS you can track them on Heavens-Above. Look out for times when there are several per minute, and the maximum elevation is 70 degrees or more. Don’t forget to set your location.

These are quite faint satellites, reaching not much below 3rd magnitude so you need a decent dark view of the sky. That said, I watched them last Sunday from a suburban Dublin back garden and had a spectacular view when 40 satellites chased each other across the sky in the space of nine minutes. That was batch three, which launched in early January, and I managed to catch a pass that went almost overhead through the zenith.

There is a possibility that batch four which launched this afternoon may be experimenting with a dark paint designed to minimise light pollution for astronomers – a very good thing considering there will be thousands of them eventually, but not good for anyone wanting to spot them. Heavens-above should have information on the magnitudes when they get the schedule up for batch four.

Meanwhile, the more familiar and easier to spot sight of the ISS is in our evening skies at the moment, with a very bright pass most evenings for the next while.

EDIT: on checking the schedule I see that the last decent appearance of batch three (for now, or maybe forever) is happening in the next half hour. From 17:44 to 17:52 forty satellites will pass over, but it looks to me like it won’t be dark enough, and certainly where I am it’s clouded out. Nevertheless, if you have an exceptionally clear unpolluted sky, look high up to the south-southwest at that time. You need to give yourself at least ten minutes of dark adjustment time first.


As a small kid I conducted some investigations into convection using a helium balloon. I made a little cardboard basket to hang below it and then added weight to make it neutrally buoyant. During fine tuning I was adding just the tiniest scraps of paper to make it hang in mid air. Then I let it loose in the sitting room. We had no central heating back then, so the coal fire was the engine of convection. It was fascinating to see how active the air in a spot heated room was. The balloon which had been in perfect balance accelerated toward the ceiling, scudded along it with an audible scrunching sound toward a wall, where it descended and hopped along the floor toward the fireplace. By placing the balloon at different starting points I could build up a picture of what the invisible air was doing.

Convection is a messy affair. As an older nipper in science class we were told that a boiling pot convects up the middle and down the sides. Well it might have been that way when we heated them with a gas flame that was concentrated in the middle. But I can tell you it’s a lot more complicated on my induction hob which heats the base of the pot fairly evenly. The convection cells don’t really know where to go.

It’s a lot more complicated again when the convecting material is magnetised, the convective layer is 200,000 kilometres deep, and the convective source is curved and rotating and producing energy equivalent to a hundred billion one-megaton nuclear bombs going off every second! We’re talking about the Sun of course. The granulations that we see on the surface are the tops of convection cells the size of Texas. Astronomers have just produced the most detailed pictures to date of this tortured cloudscape, using the new Inouye solar telescope in Hawaii. The white streaks you can see at the edges of some cells are magnetic fields leaking out.


The images of the sun are really interesting. Are there further missions looking at the sun planned?


There are two missions currently underway. Both are still manoeuvring themselves into position. The Parker Solar Probe launched 18 months ago and will study the Sun’s corona – the extremely hot outer atmosphere. It’s already doing science but is still easing itself toward its closest approaches to the Sun, getting to under 20 million kilometres later this year.

The ESA/NASA Solar Orbiter is a whole new departure. It launched last month and will look at the never-before-seen poles of the Sun. As with the other mission, it will take several years to get into position. There’s an excellent short youtube vid:


Worth a read.


Good read alright. The Voyagers were the first popular space mission I was properly aware of. (I dimly remember the earlier Vikings). The planetary grand alignment that made the Grand Tour possible caused a bit of consternation in the 70s. John Gribbin, an otherwise sober science populariser, had predicted in 1974 that the grand alignment would cause widespread earthquakes and disruption. I remember being at a programme at the London Planetarium in the late 70s where they assured the audience that nothing untoward was going to happen. In fact, it’s been calculated that having all the planets line up on one side of the Sun raised a tide that was 0.04 millimetres higher than normal. :icon_biggrin: