Keep the date, and Vote

James Glover is back with another guest post, this time digging into some poll figures ahead of the postal plebiscite on same sex marriage.

Hey, there is a survey/plebiscite/referendum on, in case you haven't heard. It's on same sex marriage or marriage equality. Leaving aside the fact that this is a survey and not at all binding on MPs, this post is not about the rights and wrongs of SSM but about how to interpret the results of a recent Newspoll. Unlike in most Western democracies, voting in Australian elections is compulsory, but as this survey is voluntary we psephologists are left with the additional problem of determining not just how people would vote but whether they feel strongly enough to vote at all. The Newspoll produced two sets of results: the familiar one of whether people support SSM or not, but also whether they intend to send in their postal surveys. Strangely enough they didn't include information on the voting intentions of those who actually intend to vote.

So I made a spreadsheet model to try to determine some possible outcomes and the real drivers of the result, based on what we gleaned from Newspoll but allowing for one side or the other getting more people out to vote and for the underlying vote being skewed towards the "Yes" vote. We know from dozens of polls going back 10 years that the majority of people, when asked, support the general notion of SSM. The results are usually in the range 60-70% in favour, 15-25% against and about 15% undecided. Newspoll has the overall level of support at 65%, about in the middle of that range. And if the ABS, who are conducting the survey, were to conduct a statistically significant poll they would almost certainly (the probability theorist's technical get-out) find a clear majority in support. Game over. Surely?

But there are other factors coming into play here. Here is a table of the Newspoll results by age, probably the most significant determinant, outside political views, of whether or not people support SSM.

Support for SSM by Age  | 18-34     | 35-49     | 50-64     | 65+       | Overall
AEC enrolled population | 4,271,289 | 4,271,290 | 4,271,291 | 4,271,292 | 17,085,162

To determine the "overall" figure, and what I will refer to as the "voting population", I am using the AEC's own figures on people enrolled to vote as of June 2017, which form the last line of the table.

As has been noted support for SSM decreases with age. But the number of people in each age cohort is about the same. The overall figure for support of 62% is towards the bottom end of most surveys but let’s leave it at that.

The Newspoll also provide figures on whether people actually intend to return their surveys.

Intention to vote    | 18-34 | 35-49 | 50-64 | 65+ | Overall
Definitely will vote |  58   |  64   |  73   |  76 |  68
Probably will        |  19   |  16   |  11   |  11 |  14
May or may not       |  12   |   9   |   8   |   8 |   9
Probably won't       |   4   |   5   |   3   |   2 |   3
Definitely won't     |   3   |   5   |   3   |   2 |   3

One obvious thing to note is that older people, who are also more likely to vote "No", are more likely to vote at all. That will skew the results towards the "No" case.

But polls two months out may not reflect the final vote, as happened in the recent US and UK elections, and support for the "Yes" case may soften. And the "No" case is probably doing a lot more to ensure they get as high a turnout as possible. So my model, on a spreadsheet of course, includes some assumptions and variable inputs, which are:

  1. I assume 100% of people who say they definitely will vote actually do.
  2. The proportion of "probably will vote" who do vote is an input.
  3. The proportion of "may or may not vote" who do vote is an input.
  4. "Probably won't", "definitely won't" and uncommitted are set at 0%.
  5. Turnout for the "No" vote. Based on the population figures, the overall turnout should be about 83%. So one input is the turnout for the "No" vote, assuming that side makes more of an effort to get its supporters out. The turnout for the "Yes" vote is then backed out from this number to match the overall turnout of 83% by age group, so a higher turnout for "No" automatically leads to a lower turnout for "Yes".
  6. For people who will claim that the "Yes" poll result is exaggerated, or that it will soften closer to the closing date, I have included an adjustment term. So "-5" means I have reduced the polled support for "Yes" by 5 percentage points, making it 57% rather than 62%.
  7. Splitting the "undecided" vote between "Yes" and "No". "P" means I have allocated it proportionally to the level of support; otherwise a parameter splits it, say, 25% to "Yes" and therefore 75% to "No".
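The turnout side of the inputs above can be sketched in a few lines. This is a rough Python reconstruction, not the author's actual spreadsheet: the percentages are the overall Newspoll intention-to-vote figures, and the two arguments are the model inputs from points 2 and 3.

```python
def overall_turnout(p_probably, p_may):
    """Expected overall turnout, given the fraction of 'probably will'
    and 'may or may not' respondents who actually vote."""
    definitely = 0.68             # 'definitely will vote', counted at 100%
    probably = 0.14 * p_probably  # 'probably will' (model input)
    may_or_not = 0.09 * p_may     # 'may or may not' (model input)
    # 'probably won't', 'definitely won't' and uncommitted count as 0%
    return definitely + probably + may_or_not

# Best-case inputs: 75% of "probably will" and 50% of "may or may not" vote
print(f"{overall_turnout(0.75, 0.50):.0%}")
```

This lands at 83%, matching the overall turnout figure quoted in point 5 (the spreadsheet itself works age group by age group, which shifts the result slightly).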

So the results? Well here they are:

Scenario                           | BCS | Expected | RWSC | WCS
% "probably will vote" who do vote | 75% | 75%      | 50%  | 25%
% "may or may not" who vote        | 50% | 50%      | 32%  | 0%
"No" vote turnout                  | P   | 95%      | 95%  | 100%
Undecided split to "Yes" vote      | P   | P        | 30%  | 0%
Adjust "Yes" vote                  | 0   | 0        | -5   | -5
Vote Yes                           | 66% | 62%      | 50%  | 40%
Vote No                            | 34% | 38%      | 50%  | 60%
Support Yes                        | 67% | 67%      | 59%  | 57%
Support No                         | 33% | 33%      | 41%  | 43%
Overall turnout                    | 84% | 84%      | 78%  | 71%
Yes turnout                        | 83% | 78%      | 67%  | 50%
No turnout                         | 85% | 95%      | 95%  | 100%
Population Yes vote                | 56% | 52%      | 39%  | 28%
Population No vote                 | 28% | 32%      | 39%  | 43%

There is good news, and bad news, depending on your viewpoint. My own view is a "Yes" vote is a good thing, but if you feel otherwise feel free to substitute "Best" for "Worst" in the above table. So here are the four scenarios. Note that once you fix the "No" vote intention to vote at, say, 95%, you remove people who intend to vote "Yes" in order to keep the Newspoll and AEC derived figure of 83% intending to vote.

  1. BCS – Best Case Scenario. Based on the Newspoll numbers I have split the intention to vote and the undecided vote equally between "Yes" and "No" voters. I have also assumed 75% of the "probably will vote" and 50% of the "may or may not" voters will vote. The result is a clear win, 66:34, for the "Yes" case. The overall number of people voting "Yes" is also 56% of the voting eligible population, so it is hard to argue this isn't a decisive result.
  2. Expected – I am assuming that the "No" case will be better at getting people out to vote than the "Yes" case, with 95% of its supporters voting. Here there is still a clear win for "Yes" at 62%, and overall that represents 52% of the population. A clear win for "Yes" on the vote, and over 50% of the population vote "Yes" as well.
  3. RWCS – Reasonable Worst Case Scenario. This is a term that I (and the Mule) picked up in our early days at DB to describe a scenario which assumes negative (from my point of view) parameters that could nonetheless be possible. Here I am assuming only 50% of the "probably wills" and 32% of the "may or may nots" vote. Because I have fixed the "No" voting rate at 95%, this leads to fewer "Yes" votes to keep the overall participation rate at 83%. Here the result is a line ball at 50:50. It could go either way. The population "Yes" vote is close to 40%, so people might argue that less than 50% of the population voted "Yes" and hence conservative MPs shouldn't take the result as definitive.
  4. WCS – Worst Case Scenario. Only 25% of the "probably wills" and none of the "may or may nots" vote, while 100% of the "No" supporters do. I've reduced the support for the "Yes" case by 5 percentage points and all undecideds are allocated to "No". Despite the overall support being 57:43 in favour of "Yes", the actual vote goes 40:60 in favour of "No", and the overall population vote is 43:28 in "No"'s favour. Under these circumstances the PM has said the vote for SSM won't come to parliament. Largely this is driven by the 100% turnout for "No" and only 50% turnout for "Yes", as well as softening of support for "Yes" and undecideds voting "No". This is the result the "No" campaign will be, literally in some cases, praying for, as it will be difficult for the Opposition and proponents of SSM to argue the issue hasn't been settled for the time being.

My own guess? It will be 55:45 in favour of “Yes” with overall support at 65:35. That will be enough for the anti SSM lobby to say support was never as high as the “Yes” camp claimed. But a win is a win and only the most devout glitter sellers won’t be running out of stock by Xmas.

Extra: How do I think the ABS should actually conduct this poll? Not by post, for a start. There are 150 electorates and one of the arguments against using results from 1,400 people is that it barely samples many of them, fewer than 10 people in some cases. In actual fact the mathematical 95% margin of error for sampling N people is (approximately) 1/sqrt(N), which for N = 1,400 is 2.67%. So the overall sample size is sufficient if the result is 60:40. But to give everyone the feeling that their voice and their neighbour's voice is being heard, how about sampling 150,000 people? That is 800 people in Australia's smallest electorate, Kalgoorlie. The MoE for individual electorates would be about ±3.5% and over the whole Australian voting population 0.25%. And it would only cost $10m. It might even become a regular thing.
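The 1/sqrt(N) rule of thumb used above is easy to check. A quick sketch (the approximation assumes the worst case of a roughly 50:50 split):

```python
import math

def margin_of_error(n):
    """Approximate 95% margin of error for a sample of n people."""
    return 1 / math.sqrt(n)

print(f"{margin_of_error(1400):.2%}")    # the 1,400-person Newspoll sample
print(f"{margin_of_error(800):.2%}")     # roughly the per-electorate MoE
print(f"{margin_of_error(150000):.2%}")  # the proposed 150,000-person sample
```

The first figure reproduces the 2.67% quoted above; the last comes out at about a quarter of a percent.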

Sic Gloria in Transit on Monday

Has it really been so long since there was a post here on the Mule? It would appear so and my only excuse is that I have been busy (isn’t everyone?). Even now, I have not pulled together a post myself but am once again leaning on the contributions of regular author, James Glover.

From pictures of the transit of Mercury you might think that Mercury is really close to the Sun and that is why it is so hot that lead is molten! In actual fact Mercury is about 0.4 Astronomical Units (AU) from the Sun (Earth is about 1 AU) and only receives about a 7-fold increase in sunlight intensity. So it is hot, but not that hot. Mercury is about 40 solar diameters from the Sun. If the Sun were a golf ball then Mercury would be about 6 feet away and the Earth about 15 feet away. On Mercury the Sun subtends an arc of 1.4 degrees compared to 0.6 degrees on Earth.
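These figures follow from two facts: sunlight intensity falls off as the inverse square of distance, and the angle a body subtends is roughly its diameter divided by its distance. A quick check (the solar diameter and AU below are standard round figures, not from the post):

```python
import math

SUN_DIAMETER_KM = 1.39e6  # standard figure for the Sun's diameter
AU_KM = 1.496e8           # one Astronomical Unit in km

intensity_ratio = 1 / 0.4 ** 2  # inverse-square law at 0.4 AU
angle_from_mercury = math.degrees(SUN_DIAMETER_KM / (0.4 * AU_KM))
angle_from_earth = math.degrees(SUN_DIAMETER_KM / AU_KM)

print(round(intensity_ratio, 1))     # a bit over 6, "about 7-fold"
print(round(angle_from_mercury, 1))  # roughly 1.3 degrees
print(round(angle_from_earth, 2))    # roughly 0.5 degrees
```

Mercury's orbit is quite eccentric (about 0.31 to 0.47 AU), which is why figures like the 1.4 degrees quoted above can differ a little from these mean-distance estimates.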

Mercury Transit

Pictures of the Moon in front of the Earth seem to have the same effect, to me at least, of making it look much closer than it is, whereas in reality the Moon is about 30 Earth diameters away. Roughly the same “size of larger body to distance of smaller one” ratio as Mercury is from the Sun.

Moon in front of the Earth

This optical effect (modesty prevents me from giving it a name) seems to occur when photographing one astronomical body in front of another. It can't be that we are using the relative sizes as a proxy for distance, since Mercury/Sun is very small and Moon/Earth is relatively large. Lacking other visual clues that a terrestrial photograph might provide, my guess is that we use the diameter of the larger body as a proxy for the distance of the smaller one, mentally substituting "distance across" for "distance from". Or maybe it's just me?

One possible explanation is that there is insufficient information in a 2D photo like this to determine the distance between the objects. But if asked “how far do you think the one in front is from the one behind?” rather than say “I can’t tell”, you choose one of the two pieces of metric information available, or some function of them, such as the average. Perhaps the brain is hardwired to always find an answer, even a wrong one, rather than admit “I don’t know”, “I have no answer” or “I have insufficient information to answer that question, Captain”. That would explain a lot of religion and politics.

Direct Action

It has been a very long time since there has been a post here on the Stubborn Mule. Even now, I have not started writing again myself but have the benefit of a return of regular guest poster, James Glover.

This is a post to explain the Australian Government's policy called "Direct Action". I will spare you the usual political diatribe. So here is how it works. The government has $3bn to spend on reducing carbon emissions. At a nominal cost of $15/tonne that could buy 200m tonnes of carbon abatement.

Okay, so how does it work? The government conducts a "reverse auction" in which bidders say: "I can reduce carbon emissions by X tonnes at a cost of $Y per tonne". You work out which bids give the biggest reduction for the least cost and apportion that $3bn accordingly. Easy peasy. That $3bn comes from government spending so ultimately from taxpayers. [Editor's note: while not directly relevant to the direct action versus trading scheme/tax discussion, I would argue in true Modern Monetary Theory style that the Australian government is not subject to a budget constraint, beyond a self-imposed political one, and funding does not come from tax payers].
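The reverse auction amounts to "sort the bids by price and buy the cheapest abatement until the budget runs out". A minimal sketch, with made-up illustrative bids:

```python
def allocate(budget, bids):
    """bids: list of (tonnes, price_per_tonne) offers.
    Buys the cheapest abatement first; returns total tonnes bought."""
    total = 0.0
    for tonnes, price in sorted(bids, key=lambda b: b[1]):  # cheapest first
        cost = tonnes * price
        if cost <= budget:
            budget -= cost
            total += tonnes
        else:
            total += budget / price  # partial fill of the marginal bid
            break
    return total

# Hypothetical bidders: (tonnes of abatement offered, $ per tonne)
bids = [(50e6, 12.0), (80e6, 15.0), (100e6, 20.0)]
print(f"{allocate(3e9, bids) / 1e6:.0f}m tonnes")  # 190m tonnes
```

With these invented bids the $3bn buys 190m tonnes, in the same ballpark as the 200m tonnes at a nominal $15/tonne mentioned above.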

As our new PM Malcolm Turnbull says, why should you have a problem with this? There is a cost and there is a reduction in carbon emissions. There will always be a cost associated with carbon reduction regardless of the method, so what does it matter if this method isn't quite the same as the carbon pricing systems previously advocated by the PM and his Environment Minister Greg Hunt? As long as there is a definite amount, X million tonnes, reduced.

Well here are a few thoughts:

1. If a company is currently making a profit of, say, $500m a year producing electricity from coal-fired power stations, why would it participate in this process? There is no downside to not participating. Maybe.

2. Okay, it is a bit more subtle than that. Suppose the difference between the cost of producing electricity from coal and from renewables works out at $15 a tonne. You might reasonably bid at $16/tonne. In reality there is a large upfront cost of converting. There is a possibility that an alternative energy provider takes that $15/tonne and uses it to subsidise their electricity cost. That could work: it encourages a coal-based provider to move to renewables. But a coal-based electricity provider might bid $14/tonne to undermine them. What we call a "race to the bottom".

3. It seems to be an argument about who exactly pays for carbon pollution. Well here is the simple answer: you pay. Who else would? And you pay because, well, you use the electricity.

4. There is no easy answer to this. Which approach encourages more electricity providers to move to renewables? That is hard to say. Every solution has its downside. I decided while writing this that I don't actually care who pays, as long as carbon is reduced.

I started out thinking Turnbull was just using the excuse "as long as it works, who cares?" but I have moved to the view that it doesn't matter. All carbon reduction schemes move the cost onto the users (of course). There are many subtleties in this argument. I personally think a cap and trade system is the best because in a lot of ways it is more transparent. But in the end, as PM Turnbull says, who cares, as long as carbon is reduced. Presumably as long as that is what really happens, eh?

Bringing Harmony to the Global Warming Debate

For some time now, our regular contributor James Glover has been promising me a post with some statistical analysis of historical global temperatures. To many, the science of climate change seems inaccessible and the "debate" about climate change can appear to come down to whether you believe a very large group of scientists or a much smaller group of people. Now, with some help from James and a beer coaster, you can form your own view.

How I wish that the title of this article was literally true and not just a play on words relating to the Harmonic Series. Sadly, the naysayers are unlikely to be swayed, but read this post and you too can disprove global warming denialism on the back of a beer coaster!

It is true, I have been promising the Mule a statistical analysis of global warming. Not only did I go back and look at the original temperature data but I even downloaded the data and recreated the original "hockey stick" graph. For most people the maths is quite complicated, though nothing beyond what an undergraduate in statistics would understand. It all works out. As a sort of professional statistician who believes in global warming and climate change, I can only reiterate my personal mantra: there is no joy in being found to be right on global warming.

But before I get onto the beer coaster let me give a very simple explanation of global warming and why the rise in CO2 causes it. Suppose I take two sealed glass boxes, identical apart from the fact that one has a higher concentration of CO2. I place them in my garden (let's call them "greenhouses") and measure their temperature, under identical conditions of weather and sunshine, over a year. The one with more CO2 will have a higher temperature than the one with less. Every day. Why? Well it's simple: while CO2 is, to us, an "odourless, colourless gas", this is only true in the visible light spectrum. In the infra-red spectrum, the box with more CO2 is darker. This means it absorbs more infrared radiation and hence has a higher temperature. CO2 is invisible to visible light but, on its own, would appear black to infrared radiation. The same phenomenon explains why a black car will heat up more in the sun than a white one. This is basic physics and thermodynamics that was understood in the 19th century, when it was discovered that "heat" and "light" are part of the same phenomenon, i.e. electromagnetic radiation.

So why is global warming controversial? Well, while what I said is undeniably true in a pair of simple glass boxes, the earth is more complicated. Radiation does not just pass through; it is absorbed, reflected and re-radiated. Still, if the earth absorbs more radiation than it emits then the temperature will increase. It is not so much the surface temperature itself which causes a problem, but the additional energy that is retained in the climate system. Average global temperatures are just a simple way of trying to measure the overall energy change in the system.

If I covered the glass box containing more CO2 with enough aluminium foil, much of the sunshine would be reflected and it would have a lower temperature than its lower-CO2 twin. Something similar happens in the atmosphere. Increasing temperature leads to more water vapour and more clouds. Clouds reflect sunshine and hence there is less radiation to be absorbed by the lower atmosphere and oceans. It's called a negative feedback system. Maybe that's enough to prevent global warming? Maybe, but clouds are very difficult to model in climate models, and water vapour is itself a greenhouse gas. Increasing temperature also decreases ice at the poles. Less ice (observed) leads to less radiation reflected and more energy absorbed: a positive feedback. It would require very fine tuning for the radiation reflected back by increased clouds to exactly counteract the increased absorption of energy due to higher CO2. Possible, but unlikely. Recent models show that CO2 wins out in the end. As I said, there is no joy in being found right on global warming.

So enough of all that. Make up your own mind. Almost time for the Harmony. Perusing the comments of a recent article on the alleged (and not actually real) "pause" in global warming I came across a comment to the effect that "if you measure enough temperature and rainfall records then somewhere there is bound to be a new record each year". I am surprised they didn't invoke the "Law of Large Numbers", which this sort of argument usually does. Actually, the Law of Large Numbers is something entirely different, but whatever. So I asked myself, beer coaster and quill in hand, what is the probability that the latest temperature or rainfall is the highest since 1880, or any other year for that matter?

Firstly, you can’t prove anything using statistics. I can toss a coin 100 times and get 100 heads and it doesn’t prove it isn’t a fair coin. Basically we cannot know all the possible set ups for this experiment. Maybe it is a fair coin but a clever laser device adjusts its trajectory each time so it always lands on heads. Maybe aliens are freezing time and reversing the coin if it shows up tails so I only think it landed heads. Can you assign probabilities to these possibilities? I can’t.

All I can do is start with a hypothesis that the coin is fair (equal chance of heads or tails) and ask what is the probability that, despite this, I observed 100 heads in a row. The answer is not zero! It is actually about 10^-30. That's 1 over a big number: 1 followed by 30 zeros. I am pretty sure, but not certain, that it is not a fair coin. But maybe I don't need to be certain. I might want to put a bet on the next toss being a head. So I pick a small number, say 1%, and say that if I think the chance of 100 heads is less than 1% then I will put a bet on the next toss being heads. After 100 tosses the hypothetical probability (if it were a fair coin) is much less than my go-make-a-bet threshold of 1%, so I decide to put on the bet. It may then transpire that the aliens watching me bet, and controlling the coin, decide to teach me a lesson in statistical hubris and make the next toss tails, and I lose. Unlikely, but possible. Statistics doesn't prove anything. In statistical parlance the "fair coin" hypothesis is called the "Null Hypothesis" and the go-make-a-bet threshold of 1% is called the "Confidence Level".
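The fair-coin arithmetic above is a one-liner:

```python
# Probability of 100 heads in a row under the fair-coin Null Hypothesis
p = 0.5 ** 100
print(p)         # about 7.9e-31, i.e. roughly 10^-30
print(p < 0.01)  # far below the 1% go-make-a-bet threshold: True
```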

Harmony. Almost. What is the probability, if I have a time series (say global temperature since 1880), that the latest temperature is a new record? For example, the average temperature in Australia in 2013 was a new record; the last average global temperature record was in 1998. I think the series is trending upwards over time with some randomness attached. But there are all sorts of random processes which produce trends, some of which are equally likely to have produced a downward trending temperature graph. All I can really do, statistically speaking, is come up with a Null Hypothesis. In this case my Null Hypothesis is that the temperature doesn't have a trend but is just the result of random chance. There are various technical measures to analyse this, but I have come up with one you can fit on the back of a beer coaster.

So my question is this: if the temperature readings are i.i.d. random variables (i.i.d. stands for "independent and identically distributed") and I have taken 134 of them (global temperature measurements 1880-2014), what is the probability that the latest one is the maximum of them all? It turns out to be surprisingly easy to answer. If I have 134 random numbers then one of them must be the maximum. Obviously. Since they are i.i.d. I have no reason to believe it will be the first, second, third, …, or 134th. It is equally likely to be any one of those 134. So the probability that the 134th is the maximum is 1/134 = 0.75% (just as it is equally likely that, say, the 42nd is the maximum). If I have T measurements then the probability that the latest is the maximum is 1/T. So when you hear that the latest global temperature is a maximum, and you don't believe in global warming, then be surprised. As a corollary, if someone says there hasn't been a new maximum since 1998 then the probability of this still being true 14 years later is 1/14 = 7%.
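The 1/T argument can also be checked by brute force. A small Monte Carlo sketch:

```python
import random

def prob_last_is_max(T, trials=50_000, seed=1):
    """Estimate the chance that the last of T i.i.d. draws is the maximum."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(T)]
        if xs[-1] == max(xs):
            hits += 1
    return hits / trials

print(prob_last_is_max(134))  # close to 1/134 = 0.0075
```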

So how many record years do we expect to have seen since 1880? Easy. Just add up the probability of the maximum (up to that point) having occurred in each year since 1880. That is H(T) = 1 + 1/2 + 1/3 + … + 1/T, known as the Harmonic Series. It is famous in mathematics because it diverges, but only just: it grows like the logarithm. For our purposes it can be well approximated by H(T) ≈ 0.5772 + ln(T), where ln is the natural logarithm and 0.5772 is known as the Euler–Mascheroni constant.

So for T = 134 we get from this simple beer-coaster sized formula: H(134) = 0.5772 + ln(134) = 5.47. (You can calculate this by typing "0.5772+ln(134)" into your Google search box if you don't have a scientific calculator to hand.) In beer coaster terms 5.47 is approximately 6. So, given the Null Hypothesis (which is that there has been no upward trend since 1880), how many record breaking years do we expect to have seen? Answer: fewer than 6. How many have we seen? 22.
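The beer-coaster formula, both exactly and via the log approximation, is a sketch of this arithmetic:

```python
import math

def harmonic(T):
    """H(T) = 1 + 1/2 + ... + 1/T: expected record years among T i.i.d. draws."""
    return sum(1 / k for k in range(1, T + 1))

T = 134  # annual observations, 1880-2014
print(round(harmonic(T), 2))           # the exact sum, just under 5.5
print(round(0.5772 + math.log(T), 2))  # the 0.5772 + ln(T) approximation
# ...versus the 22 record years actually observed
```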

Temperature peaks

Global temperatures* – labelled with successive peaks

If I was a betting man I would bet on global warming. But there will be no joy in being proven right.

James rightly points out that the figure of 22 peak temperatures is well above the 6 you would expect to see under the Null Hypothesis. But just how unlikely is that high number? And, what would the numbers look like if we took a different Null Hypothesis such as a random walk? That will be the topic of another post, coming soon to the Stubborn Mule!

* The global temperature “anomaly” represents the difference between observed temperatures and the average annual temperature between 1971 and 2000. Source: the National Climate Data Center (NCDC) of the National Oceanic and Atmospheric Administration (NOAA).

Where Have All The Genres Gone?

The Mule has returned safely from the beaches of the South Coast of New South Wales. Neither sharks nor vending machines were to be seen down there. We did, however, have a guest drop in: none other than regular blog contributor James Glover. The seaside conversation turned to music and James has distilled his thoughts for a blog post.

It seems timely to have a post with titular reference to the classic '60s folk protest song "Where Have All The Flowers Gone" written by Pete Seeger, who died this week at 94. But I have been thinking about this question for a while. Not really as a music question but as a classification question. (If you are reading this in a pub you might like to take a beer coaster and have a competition with a friend to write down as many musical genres as you can think of in 10 minutes. I assure you an argument will follow.)

Humans have an enormous tendency to classify things, but often on closer inspection these classifications turn out to be imprecise or just wrong. History shows many examples. The classification of the living kingdoms has gone from two (Plants and Animals) to five: the Eukaryotes (Animals, Plants, Fungi and Protista, e.g. algae) and, separately, the Prokaryotes (no separate nucleus). The latter has since been split by some biologists into Bacteria and Archaea (e.g. extremophiles). And, for example, we still can't agree on the number of continents versus large islands.

The point here is that what at first seemed like a very obvious and useful distinction becomes, as time passes, less distinct and may actually hinder further understanding or be proved wrong and discarded. For example, in physics the early 20th century atomic model of electrons, protons and neutrons has been replaced by the Standard Model, of which only the electron (of which there are now three types) has survived; protons and neutrons consist of quarks and gluons, and the model adds neutrinos and the Higgs particle. The racial classification of the 19th century, highly problematic now (so much so we don't use two of the original terms) but seemingly obvious at the time: Caucasians (Whites), Negroids (Blacks), Mongoloids (Asians), has similarly been shown by scientists to have no significant genetic basis. The term "intersex" (now an official gender classification in some countries in Europe, and Australia) denies the classic (and so apparently "obvious" it really didn't need explaining or justifying until recently) binary gender classification of male/female.

There are, naturally, two types of "genreism". The first is based on evolution and radiation from one or a small number of original sources. In biology the classification was originally based on form and function, whereas now it is based on genetic lineage ("cladistics"). This, for example, is why birds are now classed as "avian dinosaurs", whereas when I was a child in school we learnt that the vertebrates (animals with a backbone) were split into mammals, fish, birds, amphibians and reptiles. The second type of genreism is based on differentiation within coincidentally existing groups, e.g. fundamental particles, which all arose spontaneously (in the Big Bang in this case) rather than evolving from a single particle (or did they?). Ok, I guess there is also a third type of genreism, which combines both, such as music or continents, where the genres can arise spontaneously and then also evolve and split, or even combine. Oh dear.

Back to the music though. In another era, circa 1987, I idly wondered if there was room for any more music genres. Trying to imagine a major new musical genre is pretty much impossible with my level of musicality, but towards the end of a decade that had given us New Romantic and HiNRG I thought maybe it had all been done. It turns out I was a little wrong, as we were soon to see the explosion of Techno/House/Rave music, HipHop and then, in the 90s, Grunge and Drum'n'Bass. Of course these are arguably not major new genres in the way that Punk and Disco were in the 70s. House music is Electronica (as is Drum'n'Bass), Grunge is just Garage, which itself is Rock music, and HipHop is an extension of Rap. A quick search for "Electronica" on Wikipedia reveals several dozen sub-genres which would be virtually indistinguishable to anyone but aficionados or experts.

The point I’d really like to make (and I have asked this question online for several years to no avail) is why haven’t there been any new genres since before 2000?

So before considering that question what exactly is a “musical genre”? Given they are quite different, by definition, finding something they have in common doesn’t help. I guess they have different expressions of the following four components:

  1. Instruments, including vocals
  2. Beats
  3. Production/Arrangements
  4. Image

I am no musicologist so this list may not be exhaustive, or even the right way to look at it. I added "image" because a lot of allegedly different musical styles at different times sound quite similar if you remove the clothing and image. Like taste in food, taste in music can be largely down to looks. This is particularly true for Pop. But when it comes to genres it is very much "I don't know what it is but I'll know it when I hear it". Which also means that unless you are "into", say, electronica or metal or jazz, it may all sound pretty much the same.

So what are the musical genres? You can find various lists on the internet, including this graphically useful presentation of genres through time, but here is my list. I have included genres derived from the first in the list in brackets, though often they are as significant as their progenitor (more significant, in the case of Disco). I have also not listed what I consider to be "sub-genres" like Nu Metal, Trip Hop, New Electronica etc. These, arguably, come under derivations, deviations and revivals.

Gospel (Jazz, R&B, Soul)
Blues (R&B, Soul, Rock)
Rock (Folk, Psychedelia, Heavy Metal, Prog Rock, Glam, Reggae, Punk, Indie, Garage, Grunge)
Electronica  (Techno, House, Rave, Drum’n’Bass, Chillout)
Rap (Scratch, Hip Hop)
Pop (Folk/Protest, Country & Western, Easy Listening, Indie, New Romantic, World Music, Lounge)
Funk (Disco, HiNRG, Techno, House)

It is not entirely linear of course: Disco (Bee Gees) has more or fewer elements of Glam (early Bowie) and Funk (Sly Stone) depending on whether you are in Europe or America. I always thought Blondie was a Pop band, not a Punk (Sex Pistols, Ramones) band as they are often described in the U.S. Pop also contains a myriad of related styles with an emphasis on simple melodies and arrangements, though there are notable exceptions: even when (as with ABBA or Crowded House) the arrangements are actually quite complex, they still sound quite simple to most listeners. Indie used to be based on relentlessly non-commercial music (Nick Cave, but pick your own favourite who never had a top 40 hit, at least until they sold out) until R.E.M. crossed over and maintained both critical and commercial success. Before R.E.M. it was considered a truism that you could only have one or the other, and Indie bands which later achieved major commercial success (Smashing Pumpkins) had invariably "sold out" and "lost cred" in the eyes of their early fans.

So maybe the answer is that there is no longer a need for musical genres. There is certainly plenty of "new" music. And as DIY production becomes possible thanks to advances in technology, and the internet means people no longer need listen to a single local FM radio station promoting particular bands and genres, the very notion of genre becomes less useful. This is not unprecedented: modern movements in the visual arts (Impressionism, Cubism, Dada, Surrealism, Abstract Expressionism) have also disappeared since the 60s, when Pop Art (Warhol), Conceptual Art (Yoko Ono) and Street Art (Basquiat) finished them off. These days many artists work in multiple genres (Australia's Patricia Piccinini is one) and the concept of the "Art Movement" itself, which so strongly defined much of Art History (and coffee table Art Books), is now redundant.

So, saluting folk/rock pioneer Pete Seeger, maybe it's time to put classification systems, for music at least, behind us and just recognise that genres were "a long time passing" but are now "a long time gone". (I should also point out that there are two types of people in the world: those who like classifying things, and those who don't.)

Power to the people

Regular Mule contributor, James Glover, returns to the blog today to share his reflections on solar power.

I have been investigating solar power for years and finally bit the bullet and signed up for a system. A 4.5kW system cost me $8,500, after receiving the Government rebate (about $3,000). I've been meaning to write about my adventures in solar for a while now. It started because of a strange fact I discovered about 4 years ago. Even though the cost of solar cells has been dropping dramatically over the last 4 years (by about 75%), the payback time has stayed steady at about 5-10 years. The payback time is based on what you save by not paying your power bills, plus what you earn by selling electricity back into the grid. The peak time for solar generation is 10am-2pm, while the peak time for domestic use is in the morning and evening, outside these hours.

The answer to my conundrum is that while the cost of solar cells has been steadily dropping, so has the feed-in tariff. When the feed-in tariff was 60c per kWh, the excess power generated during the day paid for the disparity in the price of the power consumed in the evening. In Victoria the feed-in tariff has dropped to about 8c. In order to have a net zero cost of solar it is necessary to have an even bigger system, as peak power costs about 32c per kWh. A particularly good website I found for all things solar is SolarQuotes. I thoroughly recommend it, as it has lots of info on solar power as well as cost benefit analysis. They recommended two solar companies in my area, both of whom were very good.
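These figures can be pulled together into a rough payback estimate. This is strictly a beer-coaster sketch: the system size, net cost and tariffs are from the post, but the daily generation per kW and the share of power used on-site are my own assumptions, and both vary a lot between households.

```python
# Back-of-the-beer-coaster solar payback sketch using the post's figures.
# ASSUMPTIONS (not from the post): ~3.6 kWh generated per day per kW of
# panels, and 30% of generation consumed on-site, the rest exported.

SYSTEM_KW = 4.5      # system size in kW (from the post)
NET_COST = 8_500     # cost after the government rebate (from the post)
PEAK_TARIFF = 0.32   # $/kWh avoided by self-consumption (from the post)
FEED_IN = 0.08       # $/kWh earned on exports, Victorian rate (from the post)

GEN_PER_KW = 3.6     # assumed average daily kWh per kW installed
SELF_USE = 0.30      # assumed share of generation used on-site

daily_gen = SYSTEM_KW * GEN_PER_KW  # kWh generated per day
annual_saving = 365 * daily_gen * (
    SELF_USE * PEAK_TARIFF + (1 - SELF_USE) * FEED_IN
)
payback_years = NET_COST / annual_saving
print(f"Annual saving ~${annual_saving:.0f}, payback ~{payback_years:.1f} years")
```

Under these assumptions the payback comes out near the top of the 5-10 year range the post mentions; a higher self-consumption share or a better feed-in tariff shortens it considerably.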

From a financial point of view it makes sense that power companies would buy solar power at a lower rate than the one at which they sell it: this is called the bid/offer spread and is how most companies make money. The cost of producing power is about 5c per kWh, so it is still cheaper for them to produce and sell the power themselves than to buy it from solar power generators.

There is a twist to this tale however. Electricity generators are monopolies and so, left to their own devices, would naturally gouge buyers. When the state governments privatised electricity generation they set up supervisory boards to ensure the companies made reasonable, but not excessive, profits. In the absence of a competitive market, one way to do this is on a "cost plus" basis: set the profit at, say, 10% above the cost of electricity generation. This seemed reasonable until power companies found a way to game the system. If they increase the cost of providing electricity then they increase their profits.

But surely, you say, the costs of generating electricity are based on market forces for the raw materials plus the cost of running the plant? Not if you spend much more on investment than is actually necessary. And the electricity companies did this beautifully. They convinced the state government oversight bodies not only that electricity consumption was forecast to rise well above GDP growth, but that existing infrastructure needed to be "gold plated": improved to reduce the probability of a widespread failure. A combination of inflated growth predictions (and hence building new plants) and gold plating is the real reason electricity prices have risen 20% year on year over the last few years. Yes, the carbon tax has had a small effect as a one-off increase. The Coalition (now the Government) exploited this in the run up to the election, although I am pretty sure this was not the real reason the Labor government lost office.

If you take solar power growth into consideration then electricity generation from traditional sources such as coal and hydro is expected to fall, not rise. Gold plating (soon to include actual gold power lines…I think I am joking) is now seen for what it is and is being reined in.

One of the things I have always wondered is why someone doesn't set up a virtual power company which buys solar power and sells it to distributors. It turns out they already exist. The thing which swung me to the solar provider I chose (the price was identical to the others) was that they could hook me into just such a company. Sunpower is a US company which has set up in Australia to do just this. Currently their feed-in tariffs are higher (a guaranteed 20c for 2 years, as opposed to 8c from coal generating providers), though I have no expectation they will remain this high. Diamond Energy, an Australian company, is another example of a virtual power company. Diamond Energy buys green power from retail solar producers (i.e. you and me) as well as independent wind farms. They also invest in their own larger scale solar and wind farms. Market forces will dictate the future price, and I am happy to offset the environmental cost of running my air conditioning at full bore over summer.

In the US there are already communities which set up solar farms to provide their bulk electricity and sell the excess to the grid. Old style electricity companies have resorted to claiming there are problems with solar electricity, either because it is generated at the wrong time of the day, or because old style inverters produce modified sine waves from direct current rather than pure sine waves, and some electrical appliances don't operate as well on modified sine waves. Increasingly, though, inverters are of the pure sine wave type anyway. While there is some truth to these arguments, it is worth remembering that power companies would prefer that there was no solar at all. They have an axe to grind: their arguments are designed to limit the onward march of solar, or to compensate them fully for lost revenue, which would achieve the same aim through higher solar costs or lower feed-in tariffs.

Another example of why traditional power companies are increasingly out of touch is smart meters. Solar power companies monitor power usage through smart meters and solar panel output monitoring. They then provide feedback directly to your tablet or smartphone, and also work to help users optimise their power usage and minimise costs. Traditional power companies see smart meters purely as a way to save on meter reading costs; they have no interest in reducing users' power consumption.

It seems that in Australia, the “sunburnt country” we have missed a few tricks. The dinosaur coal-based power companies are fighting a rearguard action, trying to get governments to lower the feed-in tariff further or let them charge solar customers a fixed fee to cover their “costs”. I think they are on the wrong side of history. A consumer group Solar Citizens has already been effective in reminding governments that over 1m households have solar power. I think that 1m is a tipping point.

There are about 8m households in Australia. At a cost of about $5,000 each, we could make every one a net producer of electricity for $40bn. About the cost of the NBN. A new national Snowy River Scheme!

Power to the people. From the people. For the people.


Somehow September has passed by without a single post. During that time, the Mule has travelled to the other side of the world and back (primarily for a one day workshop in Switzerland). Also, James Glover (regular contributor to the blog) and I have been exploring the statistical significance of global temperatures. That will, eventually, crystallise into a future post but in the meantime James has been driven to reflect on cats rather than climate.

There are, apparently, two kinds of people. Those who like cats and those who don’t have personalities. I am of the former and am onto my 5th and 6th cats (a mother/daughter pair of rescue cats). I’ve been reading (another) book on cat behaviour which traces the domestication of the cat from solitary hunters to domestic pets (John Bradshaw’s Cat Sense: The Feline Enigma Revealed). Most domesticated animals are herd beasts whose natural behaviours lend them to domestication. A really great read on this is Jared Diamond’s Guns, Germs and Steel. Cats, however, are naturally solitary creatures whose real benefit to humans became obvious when agrarian societies stored grains which attracted rodents, the cat’s natural food source. It’s hard to imagine now, when we get our daily bread from Woolies, but think back to the day when farmers were (literally) plagued by mice and rats, and cats served to control them.

As a kid growing up in suburban Townsville we had an un-neutered tom cat called Whiskey. We weren't allowed to play with Whiskey, and I have vague memories of him bringing home litters which lived briefly under the house, and of my mother throwing him the occasional piece of liver on the back steps. He wasn't what you would call a friendly cat. When I was 8 we moved, and I recall driving with my father to take Whiskey to a "cat home". I still have an image of dozens of cats climbing up the side of a large wire cage. I am guessing Whiskey didn't last there for long, and, of course, was happily re-homed with another loving family. Yes, that's what happened.

Almost every website on cats says not to feed them cow's milk, because adult mammals don't produce lactase, the enzyme required to break down lactose, the sugar in milk. Mammals stop producing lactase once they are weaned, because their mothers no longer provide them with milk; instead they produce enzymes which turn proteins, in animal and vegetable matter, into sugars. Producing lactase would be pointless and would require resources better devoted to other enzymes, and hence has been selected against. The idea is that if cats can't digest lactose, it stays in their gut, and bacteria feeding on it lead to an upset stomach and diarrhoea. But I see several problems with this view.

  1. Humans can produce lactase as adults*, due to a variety of different genetic mutations which stop the shutdown of lactase production in adulthood. So the mutation doesn't have to find a way to produce lactase, just a way to stop stopping it. This adaptation spread because of the nutritional benefits of cow's milk to dairy farmers, starting about 10,000 years ago. Comparisons of 10,000 year old human DNA with that of modern descendants of dairy farmers show this is a widespread adaptation, due to its obvious nutritional benefits. Indigenous Australians and Inuit don't have this mutation because they have no dairy farming ancestors. This is still an open question, however, as curdled milk and cheese don't have much lactose, so don't require lactase to digest. Personally I suspect that hunters who killed a lactating cow were able to drink the milk immediately and benefited. Other theories say cow's milk, as an alternative to water, may have saved people from disease. Not all humans have the mutation: my own father, for example, can't drink milk.
  2. Cats are quickly put off foods that make them feel sick and my cats love milk. It’s possible there is something in milk which they love (like cat nip) even if it makes them sick, but they are quick learners and I doubt it.
  3. There is a lack of eye witness evidence from vets and catteries, back in the day when cats were routinely fed milk, that they suffered diarrhoea from drinking it. None of the evidence against cats drinking cow's milk seems to be based on such accounts. I've not found a single report of someone whose cats were fed cow's milk and suffered.
  4. Cats have adapted to living with humans rapidly over the last 2-3 thousand years. Due to their shorter lifespans, this is equivalent to 4-5 times as long for humans, about the same period over which humans have adapted to drinking milk as adults.
  5. It makes sense that cats which were given milk by humans, and could process it, would have a better chance of reproducing. They would have a nutritional advantage over cats which couldn't; the same evolutionary pressure that operated on humans should operate on cats, and most of them should have adapted to being able to drink milk as adults.
  6. I can't find a single study which shows cats can't produce lactase as adults; it just seems to be assumed because they are non-human mammals.

My guess is that cats descended from European cats can (most of them anyway) drink cow’s milk safely. If they drink it and come back for more it probably doesn’t upset them. My own cats, when they drink milk, run around like kids on sugary drinks, displaying very kittenish behaviour. That makes me think they are turning lactose into sugar, which means they are still producing lactase as adults.

I still find it quite amazing how memes like “cats shouldn’t drink milk” propagate across the internet without any back up evidence–like an actual study which shows it. Like climate skeptics, cat people latch onto “evidence” which supports their point of view. In any event if anyone has firm evidence that adult cats don’t produce lactase I would be happy to hear about it.


Two cats both called Minoo because cats don’t actually know their names

* Editor’s note: a recent episode of Science Friday touched on this and other evolutionary changes in the human diet. The theme of the podcast is that humans are still evolving, faster than ever. So, perhaps cats are too, as James suggests.

Poll Dancing

With elections looming, and Kevin Rudd’s return to power, it is time for our regular guest blogger, James, to pull out his beer coaster calculator and take a closer look at the polls. 

It is really that time again. Australian election fever has risen. Though in this case it feels like we have been here for three years since the last election. Polls every week telling us what we think and who we will vote for. But what exactly do these polls mean? And what do they mean by “margin of error”?

So here is the quick answer. Suppose you have a two party election (which, through Australia's preference system, is effectively what the two-party preferred, or 2PP, vote amounts to). Now suppose each of those parties really has 50% of the vote. If there are 8 million voters and you poll 1,000 of them, then what can you tell? Surprisingly, it turns out that of these inputs the figure of 8 million voters is actually irrelevant! We can all understand that if you only poll 1,000 voters out of 8 million then there is a margin of error. This margin of error turns out to be quite easy to compute (using undergraduate level Binomial probability theory) and depends only on the number of people polled, not the total number of voters. The formula is:

MOE = k × 0.5 /√N.

where N is the number of people polled and k is the number of standard deviations for the error. Now √1000 ≈ 32, so 1/√1000 ≈ 0.03 = 3%. The choice of k is somewhat arbitrary, but in this case k = 2 (because for the Normal distribution 95% of outcomes lie within k = 2 standard deviations of the mean), which conveniently makes k × 0.5 = 1. So MOE = 1/√N is a fairly accurate formula. If N = 1,000 then MOE = 1/32 ≈ 3% (give or take). This simply means that even if the actual vote was 50:50, then 5% of the time an unbiased poll of 1,000 voters would poll outside 47:53, due purely to random selection. And even if the actual vote is, say, 46:54, the MOE will be about the same.
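The beer-coaster approximation is easy to check against the exact Binomial standard error. A minimal sketch (the function name is my own, not from any polling library):

```python
import math

def margin_of_error(n, p=0.5, k=2):
    """Margin of error for a poll of n people: k standard deviations of a
    Binomial proportion with true vote share p (k=2 gives ~95% coverage)."""
    return k * math.sqrt(p * (1 - p) / n)

# Compare the exact figure with the beer-coaster rule MOE ~ 1/sqrt(N)
for n in (1_000, 40_000):
    print(f"N={n:>6}: exact {margin_of_error(n):.1%}, approx {1/math.sqrt(n):.1%}")
```

For N = 1,000 both give about 3.2%, and for N = 40,000 both give 0.5%, matching the figures quoted in the post.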

Interestingly, in the US, where there are about 100m voters, they often poll N = 40,000, which makes the MOE = 0.5%. The economics of polling scale with the number of voters, hence they can afford to poll more people. But the total number of voters, 100m or 10m, is irrelevant to the MOE. As the formula shows, to improve the accuracy of the estimate by a factor of 10 (say from 3% to 0.3%) you would need to increase the sample size by a factor of 100. You simply can't get around this.

One of the criticisms of polling is that pollsters don't reach the same proportion of (young) people on mobile phones as older people on landlines. This is easily fixed. You just reweight the figures according to the known percentages of who uses each type of phone. Similarly you can adjust by gender and age. The interesting thing, though, is that the further your sample is from the population's actual phone usage/gender/age profile, the more you also need to increase your MOE, but not your expected outcome.

Okay, so that is it: MOE = 1/√N, where N is the number of people polled. If N = 1,000 then MOE ≈ 3%. My all time favourite back of the beer coaster formula.

The recent jump in the 2PP polls for Labor when Kevin Rudd reassumed the PM-ship from about 45% to 49% were greeted by journalists as “Kevin Rudd is almost, but not quite, dead even”. I found this amusing as it could statistically have been 51%, within the MOE, in which case the headline would have been “Kevin Rudd is ahead!”. Indeed barely a week later he was “neck and neck” in the polls at 50:50. Next week it may be “51:49” in which case he will be declared on a certain path to victory! However within the MOE of 3% these results are statistically indistinguishable.

From my point of view as a professional statistician, I find the way many journalists develop a narrative based on polls from week to week, without understanding the margin of error, quite annoying. Given the theory that a politician with "The Mo" (i.e. momentum) may be helped by it to win, it is irresponsible to allow random fluctuation due to statistical sampling error to influence the outcome of an election. Unless of course it helps the party I support win.

NDIS and how many disabled people are there anyway?

Regular guest writer, James Glover, returns to the Mule today to look at the figures behind the proposed NDIS.

The National Disability Insurance Scheme (NDIS) is in the news again. A welcome development for people with disability and their carers and families…and friends and pretty much anyone else who cares about their fellow humans. It is not a platitude to say that disability can strike anyone at any time in their life, and the stories of these people are truly moving and shaming, especially as we live in one of the richest countries in the world. Adults who are only provided with two assisted showers a week; parents providing 24/7 care to profoundly disabled children, who cannot afford a new specialised wheelchair because there is limited funding for such things (wheelchairs cost from $500 for the basic models, of which I have two, and range up to $20,000 or more). In August 2011 the Productivity Commission reported on and recommended the NDIS, and since then pretty much everyone agrees it is a good idea, if we could only agree how to fund it.

So what does it replace? Currently most people with serious disabilities that prevent them from, inter alia, working, can receive the disability support pension (DSP). A small number will have insurance payouts, if they were "lucky" enough to have someone else to blame for their disability. In addition, anyone can receive a rebate on medications in excess of about $1,200 a year and, of course, access to (not quite free) public health care. On top of that, there are concession cards for public transport and a taxi card system which provides half-price taxi fares, to partially make up for many disabled people's inability to use public transport. The DSP does not depend on a specific disability; for a single adult over 21 with no children it is about $19,000 a year, and for a child under 18 who is living at home it is about $9,000 a year. While this would appear enough to live on (forgetting overseas holidays or a mortgage), most such people rely on additional support services for everything from basic medical equipment to respite for carers. There are currently 820,000 people, about 4% of the population, on the DSP. The Productivity Commission estimates 440,000 people on the NDIS, so most of these will not be eligible for the NDIS, but may still receive the DSP. People 65 and over, being of pensionable age, are not eligible for the DSP and will not be eligible for the NDIS.

The purpose of the NDIS is to provide funding for care in line with the specific requirements of the recipients, and will mean additional support on top of the DSP for some. Unlike the DSP, it isn't a fortnightly stipend or, like standard disability or employment insurance, a lump sum. The government is planning to roll out pilot programs in many regions over the next few years, aiming for a complete national program by 2018-19. I won't go into the politics, but it seems even politicians can feel shame, and bipartisan support for the NDIS is emerging, with a good chance of a bill through this parliament in the next few weeks. The total cost of the NDIS is often quoted as $18bn a year. Some funding is proposed from an additional 0.5% on the Medicare levy; other funding will come jointly from the federal government and the states. The proposed levy will raise about $3.8bn a year, nowhere near the full cost. Even if you subsume half the $11bn a year DSP cost, that still leaves an outstanding $8-10bn a year to be funded on top of the Medicare levy. Hopefully, with bipartisan support, the full NDIS will be implemented sooner rather than later.
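The funding arithmetic can be laid out beer-coaster style. The figures are the ones quoted above; only the subtraction is added.

```python
# The NDIS funding gap, using the figures quoted in the post (all $bn/year).
total_cost = 18.0      # oft-quoted annual cost of the full NDIS
levy = 3.8             # raised by the proposed 0.5% Medicare levy increase
dsp_offset = 11.0 / 2  # subsuming half the $11bn annual DSP cost

shortfall = total_cost - levy - dsp_offset
print(f"Remaining gap: ~${shortfall:.1f}bn a year")
```

The gap lands at the bottom of the $8-10bn range; subsuming less of the DSP, or a lower levy take, pushes it towards the top.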

So that’s the background on the NDIS. The real purpose of this article though is to consider the question “How many disabled people are there in Australia anyway?”.

Well, that's easy: just read any article on disability, for instance this one by disability advocate and media personality Stella Young, and you'll be told the answer: 20%. 20%. 20%! I am a huge admirer of Stella Young's work, so don't get me wrong if I choose to disagree with her on this. The 20% figure gets quoted so frequently it must be true. Well, maybe. People questioning this figure are directed to the 2009 ABS report on disability, where the self-reported disability figure is 18.1% (+/-1.3%). So a round 20% is not too bad, right? Well, like all statistics, the details are important. Firstly, this includes people of all ages and, not surprisingly, many more older people have disabilities: from 40% at ages 65-69 to 88% at 90+. For those under 65 the figure is 13.2%. The rate increases with age and, in the 45-54 age group, is about the overall average of 18%. Anyway, why does it matter if the true figure is overstated? Well, one reason is that while there is widespread support for the NDIS, the one concern that keeps coming up is who is eligible.

The Productivity Commission report estimates 440,000 people on the NDIS, of whom 330,000 would be disabled, with the rest made up of carers and people on preventative programs.

This report has a deeper analysis, which takes the figures at face value. It also includes breakdowns by disabling condition. I have paraphrased these in the following table, based on some of the major causes of disability. And look, there are those perennial favourites of those who think all disabled people are really bludgers: back problems, stress and depression, making up about 18% of the total. Not quite bankrupting the country, then.

Disability table 1

But what constitutes disability? It is basically a lack of normal activity, rather than a set of diseases per se. The ABS report has 5 activity based categories, four of which are based on restrictions on core activities: communication, mobility and self care. These are the "profound", "severe", "moderate" and "mild" levels of disability. A fifth category, "schooling or employment restriction", overlaps with the first four. Here is a table with the breakdown by category and age group. Combining those with a core activity limitation and those with employment/school limitations, the figure is 15.3%. The difference between this and the higher self-reported 18% figure I suspect comes from people who feel a bit crap a lot of the time, but aren't significantly prevented from their activities. So I would estimate the number of disabled people to be more like 15% than 20%. For those under 65 it is 11%. The NDIS has a similar definition, but includes social activities as well; it doesn't yet provide any breakdowns.

Disability table 2

So much for the figures from the ABS, which I think we can all agree are definitive, right? Looking at the ABS figures for this group (under 65), they total 345,000. But wait! The figure of 15.3% is based on a total of only 9.5 million survey respondents. If the reportage rate for this group was the same as for the general population of 22m, then there would be about 800,000 severely or profoundly disabled people. But the Productivity Commission estimates only 330,000 on the NDIS, well under half this number! The alternative to the unlikely event that fewer than 50% of profoundly or severely disabled people will end up on the NDIS is that the reported ABS count for people in this category is correct, but the rate is wrong. While the overall reportage rate is about 50%, it looks like the reportage rate for disabled people in the severe and profound category is closer to 100%. If this was also true for the other categories of disabled people, then the real rate of disability is less than 9%, and maybe as low as 7%. Assuming instead that the reportage rate for the other categories is the same as for the rest of the population, i.e. about 50%, the disability rate might be as high as 13%. So let's split the difference and say 10%. In any event, the widely reported figure of 20% is well above the highest estimates based on the ABS and Productivity Commission data. The real rate of disability is closer to 10% than 20%.
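The scaling step can be made explicit. The counts are the ones discussed above; the only step added is straight proportional scaling, which puts the scaled-up figure at roughly 800,000, several hundred thousand above the Productivity Commission's NDIS estimate.

```python
# Scaling the ABS sample count up to the whole population, beer-coaster style.
population = 22.0e6                 # total Australian population
respondents = 9.5e6                 # ABS survey respondents behind the 15.3% figure
severe_profound_reported = 345_000  # severe/profound, under 65, as reported
ndis_estimate = 330_000             # Productivity Commission NDIS estimate

overall_rate = respondents / population           # ~43%, "about 50%" in round figures
scaled = severe_profound_reported / overall_rate  # if this group reported at the average rate
print(f"Scaled-up count: ~{scaled/1000:.0f}k vs NDIS estimate {ndis_estimate/1000:.0f}k")
```

Since the NDIS estimate is well under half the scaled figure, either coverage will be surprisingly low or, as argued above, severely disabled people respond at a much higher rate than average.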

Does it matter? Maybe. If you claim that 20% of the population are disabled, people quickly calculate that the cost would be unsupportable if all of those people were on the NDIS! Which of course they won't be. Fewer than half of disabled people are on the DSP, and fewer than half of those will transfer to the NDIS. Overstating the percentage of disabled people isn't necessarily a good argument for the NDIS if it reduces support from otherwise sympathetic people.

A final thought: in the large Australian organisation I work for, there are a fair few disabled people, some of whom I think would be categorised as severe. With proper support many disabled people can gain suitable education or training and hence employment and support themselves and contribute to the economic activity of the nation. The more people with disability who are employed the fewer on the DSP or NDIS, the more money for those who really have no choice. Supporting people with disability into employment is as important, in my opinion, as supporting them in living and care through the NDIS.

[This article was rewritten following some comments and some further research. In line with all my articles on Stubbornmule this article is about estimating rough numbers from scarce data “back of the beercoaster” style rather than disability politics, it just happens I have a personal interest in this subject]


Echo Alpha Romeo

The Mule is travelling and, while he contemplates possible posts by the sea, regular guest contributor James Glover has stepped in with an analysis of applause.

It may seem indulgent (and possibly non productive), but Friday Afternoon Physics (FAP), like Friday Afternoon Maths, is one of my favourite activities. It is also the holiday season and, as the owner of this blog is busy counting echidnas down south, I wanted to share what is so far my favourite FAP for 2013.

The question is: why isn't it the case that, as more people clap in an auditorium, the volume actually decreases?

Now this may seem like a dumb question. Surely more people clapping = more noise = louder, right? Well, except that it isn't that simple. Noise cancelling headphones work by detecting the incoming noise signal and producing a signal of exactly the opposite shape. Detected sound (by our auditory system: ears + brain) is a variation in the ambient pressure. So noise cancelling headphones actually produce, on average, a slightly higher pressure, as they carry energy from both the original signal and the noise cancelling signal. Your ear drums don't care what the overall pressure is, though (up to a point), just the differences. An electrical device can't easily produce an exactly cancelling sound, but it can produce a sound with the opposite signal at the same average pressure. So if a lot of people are clapping randomly, then while this increases the overall pressure, the differential contributions tend, on average, to cancel each other out.

This just doesn’t just seem counterintuitive but experience suggests that the clapping in a room with more people is louder than a room with less people. But it would appear mathematically to be correct. Just as the average of a sequence of random variables has a smaller average amplitude from the mean (the standard deviation) so it must be the case that the more people in a room clapping will have a higher average pressure (not detectable by our auditory system) but also a lower variation (which is what we detect as volume).

Our solution: contributions from people nearby are less likely to average out than those from people far away. If we divided the room into people nearby (say, under 25m) and those further out, the smaller contribution from those further out is not just because they are further away, but because their contributions actually even out. They raise the overall pressure, but not so much the apparent volume. In fact, in the simplest models the apparent volume would actually decrease the more people were present!
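The cancellation argument can be poked at with a toy simulation. This is my own sketch, not the model from the discussion: each clapper contributes an independent random +/-1 "pressure wiggle" per time sample, and we measure the RMS (the fluctuation our ears detect) of the summed signal. Partial cancellation then shows up as the RMS growing like √n rather than n.

```python
import math
import random

def summed_rms(n_clappers, n_samples=2_000, seed=1):
    """Estimate the RMS pressure fluctuation of the sum of n_clappers
    independent random clap signals (each +/-1 per time sample)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # instantaneous pressure: sum of independent +/-1 contributions
        s = sum(rng.choice((-1.0, 1.0)) for _ in range(n_clappers))
        total += s * s
    return math.sqrt(total / n_samples)

for n in (1, 100, 900):
    print(f"{n:>4} clappers: RMS ~ {summed_rms(n):6.1f}  (sqrt(n) = {math.sqrt(n):5.1f})")
```

In this simplest model the fluctuation doesn't actually decrease with more clappers, but the gap between √n and n is exactly the "evening out" described above: 900 clappers are only about 30 times as loud as one, not 900 times.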

My insight is this: next time you are in a concert hall with, say, 1,000 people, compare the apparent volume with a hall holding 10,000 people. It would be deafening if volume simply scaled with the number of people. I am not sure about this, but it appears to be true.

My final piece of evidence is the following: modern concert halls try to reduce unnecessary echoes, but total elimination of echo is bad. They used to do this by adding noise absorbing materials to the roof and walls. That is a disaster: a concert hall without echoes is a soulless place. Modern concert halls instead add random topographical features (usually in the ceiling, e.g. at random heights) that produce decoherence. Decoherence means the reflected sound waves have different phases and hence quickly (but not too quickly) become undetectable, as they tend to cancel each other out. The refurbishment of Hamer Hall in Melbourne did exactly this. So the solo horn in Brahms' First Piano Concerto reflects but doesn't reverb. Something similar happens with those noise "reduction" walls you see along freeways. They don't absorb noise, but all those cockatoos and gum leaves act to randomise the noise signal from the highway and even it out: the ear doesn't notice the increase in average pressure but enjoys the decrease in variation.

Friday Afternoon Physics is good. It doesn’t lead to Nobel (or IgNobel) prizes but occasionally leads to Back of the Beer Coaster Calculations. Just prior to our discussion of this question we also worked out that 2-3 tankers a day could supply Melbourne with fresh water from Antarctica. But that’s another post.

Editor's note: the echidna count so far has been zero.