Getting Australia Post out of the red

John Carmody returns to the Mule in his promised second guest post and takes a close look at Australia Post’s profitability with some (ahem) back-of-the-envelope calculations.

There are many forms of communication which underpin the function and productivity of a modern society like Australia. Despite the Cassandra-commentary from Mr Ahmed Fahour (the well-paid CEO of Australia Post), regular mail delivery certainly remains one of them.

In making his tendentious, but opaque, points, he has not been entirely frank with the community. He has, for instance, claimed that 99% of our mail is electronic. That assertion is meaningless, because so much e-mail is advertising, brief inter- or intra-office memos and notices, or quick substitutes for telephone calls. When these are removed from the calculation, the importance of “hard mail” becomes more obvious.

The data which the Herald has published (for instance, “Please Mr Postman: snail mail doomed to disappear”, 14 June) also show how shallow or formulaic Mr Fahour’s thinking seems to be. In 2012-13 Australia Post made an after-tax profit of $312 million and, if there had been no losses on the handling of letters, that would have been $530 million. Do Australians really want a profit of that magnitude from such a vital national service?

But when one looks a little more closely at that “letter-loss”, and at the figure of 3.6 billion letters delivered that year, it is clear that the loss per letter was 6.5 cents. In other words, if the recent increase in the cost of a standard letter had taken the price to 75 cents rather than 70 cents, the losses would have been comprehensively dealt with.
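Made explicit, the back-of-the-envelope arithmetic runs as follows (a sketch only; note that on the after-tax figures quoted above the loss comes out at just over 6 cents per letter, so the 6.5 cent figure presumably reflects the larger pre-tax letter loss):

```python
# Back-of-the-envelope check of the letter-loss arithmetic quoted above.
profit_after_tax = 312e6              # 2012-13 after-tax profit ($)
profit_ex_letter_losses = 530e6       # profit had letters broken even ($)
letters_delivered = 3.6e9

letter_loss = profit_ex_letter_losses - profit_after_tax
print(f"Letter losses: ${letter_loss / 1e6:.0f} million")
print(f"Loss per letter: {100 * letter_loss / letters_delivered:.1f} cents")
# On these figures: $218 million, or about 6.1 cents per letter. The post's
# 6.5 cents is presumably based on the (larger) pre-tax letter loss.
```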

Some comparisons might be informative. The British Royal Mail currently charges about $A1.10 for next-day delivery of a standard (20g) letter within the UK (its “aim”) and $A0.95 if you are happy with delivery within three days. The Deutsche Post charges the equivalent of 86 Australian cents for delivery within Germany but about $A1.08 to adjacent France. Given that we currently pay only 70 cents for delivery across a far larger area, my suggested price of 75 cents seems reasonable and justified.

The government’s medical fairyland

For the first time in a while, John Carmody returns to the Stubborn Mule with the first of two guest posts. He argues that the government’s proposed medical “co-payments” do not add up.

The government continues to flounder about many details of its budget, and part of the reason is a lack of clarity about its intentions (although the electors are drawing their own conclusions about those intentions and whether they are fair and honest). The proposed $7 “co-payment” for GP visits is an example of this lack of frankness.

On the one hand, the Government – purporting to be concerned about excessive patronage of GPs – seems to want us to visit our doctors less frequently than the 6 visits which every man, woman and child currently makes each year (i.e. about once every two months for each of us; an internationally comparable figure, incidentally). On the other hand, it has, so to speak, attempted to sugar-coat this unpleasant pill by promising that, while a little of that fee will go to the practitioners, most of it will go into a special fund (to be built up to $20 billion over the next 6 years) to boost medical research (and thereby do us all a great deal of good). Neither claim survives scrutiny.

The proposed $2 share for GPs will not compensate them for the extra administrative costs which they will have to carry on behalf of the Government; nor will that nugatory sum compensate for the progressive tightening of the reimbursement of doctors from Medicare. So the Government’s share will, realistically, need to be significantly less than $5. After dealing with its own extra administrative costs, therefore, the Government will probably only be able to put $3-4 per GP consultation into the proposed research fund. To build that fund up to the proposed $20 billion will require every Australian to visit the GP about 50 times each year – once each week. How this is going to reduce our alleged “overuse” of medical services has not been explained. Nor has how, in practice, it can be achieved. The Government is living in Fairyland.
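The arithmetic is easy to check on the back of another envelope. Here is a short sketch (the population figure of roughly 23 million is my assumption, not a number from the post):

```python
# How many GP visits per person per year would it take to build a
# $20 billion fund from a $3-4 government share per consultation?
population = 23e6        # assumed Australian population
fund_target = 20e9       # $20 billion
years = 6

for govt_share in (3.0, 4.0):
    consultations_per_year = fund_target / (govt_share * years)
    visits_per_person = consultations_per_year / population
    print(f"${govt_share:.0f} per visit: {visits_per_person:.0f} visits "
          "per person per year")
# Roughly 48 visits at $3 and 36 at $4: near the post's "about 50",
# and a far cry from the 6 visits we currently make.
```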

 

Government spending

Before, during and after this month’s budget, Treasurer Joe Hockey sounded dire warnings about Australia’s “budget emergency”. Amidst this fear-mongering, it was a pleasant relief to come across a dissenting view. In a recent interview on 2SER, Dr Stephanie Kelton (Department of Economics at the University of Missouri in Kansas City) argued that the government budget is very different from a household budget, however appealing that analogy might be. A government like Australia’s, with its own free-floating currency, can spend more than it takes in taxation without worrying about running out of money. While the economy is weak, the government can comfortably run a deficit. The constraint to worry about is the risk of inflation, which means curbing spending once the economy heats up.

I posted a link to Facebook, and immediately drew comment from a conservative, libertarian-minded friend: “of course a deficit is a bad thing!”. Pressed for an explanation, he argued that government spending was inefficient and “crowded out” more productive private sector investment. This did not surprise me. Deep down, the primary concern of many fiscal conservatives is government spending itself, not the deficit. This is easy to test: ask them whether they would be happy to see the deficit closed by increased taxes rather than decreased spending. The answer is generally no, which helps explain why so many more traditional conservatives are horrified by the prospect of the Coalition’s planned tax on higher income earners… sorry, “deficit levy”.

From there, the debate deteriorated. North Korea was compared to South Korea as evidence of the proposition that government spending is harmful, while a left-leaning supporter asked whether this meant Somalia’s economy should be preferred to Sweden’s. Perhaps foolishly, I proffered a link to an academic paper (on the website of that bastion of left-wing thought, the St. Louis Fed) which presented a theoretical counter-argument to the “crowding out” thesis. My sparring partner then rightly asked whether the thread was simply becoming a rehash of the decades-old Keynes vs Hayek feud, a feud best illustrated by Planet Money’s inimitable music video.

Macroeconomic theory was never going to get us anywhere (as I should have known only too well). Instead, the answer lay in the data, with more sensible examples than North Korea and Somalia. Aiming to keep the process fair, and to avoid the perils of mining the data until I found an answer that suited me, here was my proposal:

I’m going to grab a broad cross-section of countries over a range of years and compare a measure of government expenditure (as % of GDP to be comparable across countries) to a measure of economic success (I’m thinking GDP per capita in constant prices).

If indeed government spending is inherently bad for an economy, we should see a negative correlation: more spending, weaker economy and vice versa. My own expectation was to see no real relationship at all. In a period of economic weakness, I do think that government spending can provide an important stimulus, but I do not think that overall government spending is inherently good or bad.
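For anyone who wants to replicate the comparison, it amounts to something like the following sketch (the file name and column names are hypothetical placeholders for your own export from the IMF eLibrary):

```python
# Compare government spending (% of GDP) with GDP per capita (PPP US$),
# both averaged over 2002-2012, across a cross-section of countries.
import pandas as pd

# Hypothetical CSV with columns: country, spend_pct_gdp, gdp_pc_ppp
data = pd.read_csv("imf_spending_gdp.csv")

r = data["spend_pct_gdp"].corr(data["gdp_pc_ppp"])  # Pearson correlation
print(f"Correlation between spending and GDP per capita: {r:.2f}")

ax = data.plot.scatter(x="spend_pct_gdp", y="gdp_pc_ppp")
ax.set_xlabel("Government spending (% of GDP)")
ax.set_ylabel("GDP per capita (PPP US$)")
```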

The chart below illustrates the relationship for 32 countries taken from the IMF’s data eLibrary. To eliminate short-term cyclical effects, government spending and GDP per capita (in US$, converted using purchasing power parity) were averaged over the period 2002-2012.

Govt. Spending vs GDP

The countries in this IMF data set are all relatively wealthy, with stable political structures and institutions. All but one are classified as a “democracy” by the Polity Project (the exception is Singapore, which is classified as an “anocracy” on account of its high autocracy rating). This helps to eliminate more extreme structural variances between the countries in the study, providing a better test of the impact of government spending. Even so, there are two outliers in this data set: Luxembourg has by far the highest GDP per capita, while Mexico has quite a low GDP per capita and the lowest rate of government spending.

The chart below removes these outliers. There is no clear pattern to the data. There is no doubt that government spending can be well-directed or wasted, but for me this chart convincingly debunks a simple hypothesis that overall government spending is necessarily bad for the economy.

Government Spending vs GDP per capita

Now look for the cross (+) on the chart: it is Australia (the IMF data set does not include New Zealand, so we are the sole representative of Oceania). Despite Hockey’s concerns about a budget emergency, Australia is a wealthy country with a relatively low rate of government spending. Among these 30 countries, only Switzerland and South Korea spend less. These figures are long-run averages, so perhaps the “age of entitlement” has pushed up spending in recent years? Hardly. Spending for 2012 was 35.7%, compared to the 2002-2012 average of 35.3%. The shift of the budget from surplus to deficit is the result of declining taxation revenues rather than increased spending. Mining tax anyone?

Randomness revisited (mathsy)

My recent randomness post hinged on people’s expectations of how long a run of heads or tails you can expect to see in a series of coin tosses. In the post, I suggested that people tend to underestimate the length of runs, but what does the maths say? The exploration of the numbers in this post draws on the excellent 1991 paper “The Longest Run of Heads” by Mark Schilling, which would be a good starting point for further reading for the mathematically inclined.

When I ran the experiment with the kids, I asked them to try to simulate 100 coin tosses, writing down a sequence of heads and tails. Their longest sequence was 5 heads but, on average, for 100 tosses, the length of the longest run (which can be either heads or tails) is 7. Not surprisingly, this figure increases for a longer sequence of coin tosses. What might be a bit more surprising is how slowly the length of the longest run grows: just to bump the average length up from 7 to 8, the number of tosses has to increase from 100 to 200. It turns out that the average length of the longest run grows approximately logarithmically with the total number of tosses, and this formula gives a pretty decent approximation of the expected length:

average length of longest run in n tosses ≃ log₂(n) + 1/3

The larger the value of n, the better the approximation and once n reaches 20, the error falls below 0.1%.
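If you would like to check the approximation yourself, a quick Monte Carlo sketch does the job (my sketch, not code from Schilling’s paper):

```python
# Estimate the expected length of the longest run (heads or tails)
# in n fair coin tosses, and compare it with log2(n) + 1/3.
import math
import random

def longest_run(n):
    """Longest run of identical outcomes in n simulated fair coin tosses."""
    longest = current = 1
    last = random.random() < 0.5
    for _ in range(n - 1):
        toss = random.random() < 0.5
        current = current + 1 if toss == last else 1
        longest = max(longest, current)
        last = toss
    return longest

trials = 20_000
for n in (100, 200):
    average = sum(longest_run(n) for _ in range(trials)) / trials
    print(f"n = {n}: simulated {average:.2f} vs "
          f"approximation {math.log2(n) + 1/3:.2f}")
```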

Growth of the Longest Run

However, averages (or, technically, expected values) like this should be used with caution. While the average length of the longest run seen in 100 coin tosses is 7, that does not mean that the longest run will typically have length 7. The probability distribution of the length of the longest run is quite skewed, as is evident in the chart below. The most likely length for the longest run is 6, but there is always a chance of getting a much longer run (more so than very short runs, which cannot fall below 1) and this pushes up the average length of the longest run.

Distribution of the Longest Run in 100 coin tosses

What the chart also shows is that the chance of the longest run being only 1, 2 or 3 heads or tails long is negligible (less than 0.03%). Even adding runs of 4 heads or tails contributes less than 3% to the cumulative probability. So, the probability that the longest run has length at least 5 is a little over 97%. If you ever try the coin toss simulation experiment yourself and you see a supposed simulation which does not have a run of at least 5, it is a good bet that it was the work of a human rather than a random coin.

Like the average length of the longest run, this probability distribution shifts (approximately) logarithmically as the number of coin tosses increases. With a sequence of 200 coin tosses, the average length of the longest run is 8, the most likely length for the longest run is 7, and the chance of seeing at least 5 heads or tails in a row is now over 99.9%. If your experimental subjects have the patience, asking them to simulate 200 coin tosses makes for even safer ground on which to prove your randomness detection skills.

Distribution of the Longest Run in 200 coin tosses

What about even longer runs? The chart below shows how the chances of getting runs of a given minimum length increase with the length of the coin toss sequence. As we’ve already seen, the chance of seeing a run of at least 5 gets high very quickly, but you have to work harder to see longer runs. In 100 coin tosses, the probability that the longest run has length at least 8 is a little below 1/3, and it is still only just over 1/2 in 200 tosses. Even in a sequence of 200 coin tosses, the chance of seeing at least 10 heads or tails in a row is only 17%.

Longest Run probabilities

Getting back to the results of the experiment I conducted with the kids, the longest run for both the real coin toss sequence and the one created by the children was 5 heads, so none of the results above could help to distinguish them. Instead, I counted the number of “long” runs. Keeping the distribution of run lengths for 100 tosses in mind, I took “long” to be any run of 4 or more heads or tails. To calculate the probability distribution of “long” runs, I used simulation*, generating 100,000 separate sequences of 100 coin tosses. The chart below shows the results, giving an empirical estimate of the probability distribution of the number of runs of 4 or more heads or tails in a sequence of 100 coin tosses. The probability of seeing no more than two of these “long” runs is only 2%, while the probability of seeing 5 or more is 81%.
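The simulation was along the following lines (a sketch of the approach rather than the original code):

```python
# Empirical distribution of the number of "long" runs (4 or more heads
# or tails in a row) in 100 coin tosses, from 100,000 simulated sequences.
import random
from collections import Counter
from itertools import groupby

def count_long_runs(n=100, min_length=4):
    tosses = [random.random() < 0.5 for _ in range(n)]
    return sum(1 for _, run in groupby(tosses) if len(list(run)) >= min_length)

counts = Counter(count_long_runs() for _ in range(100_000))
total = sum(counts.values())
print("P(2 or fewer long runs):", sum(counts[k] for k in (0, 1, 2)) / total)
print("P(5 or more long runs): ",
      sum(v for k, v in counts.items() if k >= 5) / total)
```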

These results provide the ammunition for uncovering the kids’ deceptions. Quoting from the Randomness post:

One of the sheets had three runs of 5 in a row and two runs of 4, while the other had only one run of 5 and one run of 4.

So, one of the sheets was in the 81% bucket and one in the 2% bucket. I guessed that the former was the record of coin tosses and the latter was devised by the children. That guess turned out to be correct and my reputation as an omniscient father was preserved! For now.

Runs at least 4 long

If you have made it this far, I would encourage you to do the following things (particularly the first one):

  1. Listen to Stochasticity, possibly the best episode of the excellent Radiolab podcast, which features the coin toss challenge
  2. Try the experiment on your own family or friends (looking for at least 3 runs of 5 or more heads or tails and ideally at least one of 6 or more)
  3. Share your results in the comments below.

I look forward to hearing about any results.

* UPDATE: I subsequently did the exact calculations, which confirmed that these simulated results were quite accurate.

Do Daleks use toilet paper?

I have been watching some (very) old Doctor Who episodes, including the first ever serial featuring the archetypal villains, the Daleks. In this story, the Daleks share a planet with their long-time enemies, the Thals. After a war culminating in the detonation of a neutron bomb, both races experienced very different mutations. The Daleks have become shrunken beasts that get about in robotic shells, while the more fortunate Thals mutated into peace-loving blondes.

The Thals hope to make peace with the Daleks, but the Daleks have more fiendish plans and plot to lure the Thals into their city with a gift of food and then ambush them. It is a good plan, but it is the choice of gifts that left me bemused. There is plenty of fruit and some large tins whose contents remain undisclosed. These may be reasonable choices, although I do find it hard to picture the Daleks stacking melons with their plunger hands. But the trap also appears to feature stacks of toilet paper. Granted, toilet paper may be an appealing luxury for the Thals, who have been trekking through the jungle for a year, but the real question here is: why do Daleks even have toilet paper?

Dalek ambush

Randomness

With three children, I have my own laboratory at home for performing psychological experiments. Before anyone calls social services, there is an ethical committee standing by (their mother).

This evening, I tried out one of my favourites: testing the perception of randomness. Here is the setup: I gave the boys two pieces of paper and a 20 cent coin. I was to leave the room while they decided which of the two sheets would be filled in by the boys themselves and which by the coin. Having made their choice, they had to write down on one of the sheets their best attempt at a “random” sequence of 100 heads (H) and tails (T). Having done that, they were to toss the coin 100 times, writing down on the other sheet the sequence of heads and tails that came up. I would then return to the room and guess which sheet was determined by the toss of the coin, and which by the boys.

I identified which sequence was which in less than 30 seconds. How did I do it?

The trick is to look for the longer sequences. Much like the gambler at the roulette wheel, the kids assume that a run of heads cannot last too long. One of the sheets had three runs of 5 in a row and two runs of 4, while the other had only one run of 5 and one run of 4. I correctly picked that the sheet with more long runs was determined by the coin toss.

Try it yourself sometime. If you see a run of 6 or more (which is in fact quite probable in a sequence of 100 coin tosses), you can quite confidently pick that as the coin toss, unless your subject has been well schooled in probability.
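If you do try it, a few lines of code save squinting at the page (a Python sketch of the trick):

```python
# Party-trick helper: find the longest run in a hand-written
# sequence of heads (H) and tails (T).
from itertools import groupby

def longest_run(sequence):
    """Length of the longest run of consecutive identical characters."""
    return max(len(list(group)) for _, group in groupby(sequence))

# A sheet with a run of 6 or more is very likely the real coin tosses;
# a sheet whose longest run is short is probably the human fake.
print(longest_run("HHTHT" * 20))   # 2: suspiciously regular
```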

Our intuition struggles with randomness. We tend to assume randomness is more regular than it is. On the other hand, we also try to find patterns where there is only randomness, whether it is the man in the moon, clouds that look like things, the face of Mary on a piece of toast or, perhaps, an explanation for the disappearance of MH370.

Chinese non-residents…in China

Recently I travelled to China for the first time. My first glimpse of Beijing took in the Escher-like headquarters of the Chinese TV station CCTV. It is an extraordinary building and, to get a proper sense of it, you have to see it from a number of different angles.

Driving across the city, impressed by the scale of the place, I asked one of my hosts about the population of Beijing. He told me there were about 40 million people, including non-residents: almost double the entire population of Australia. Maybe it’s an exaggeration, but more than the figure itself, it was the reference to “non-residents” that piqued my interest. Were there really so many people moving to China as to have a significant impact on the population of the capital?

Later, I learned that these non-residents were in fact people from other provinces. Under China’s Hukou system, restrictions are imposed on people’s ability to move from one part of the country to another. Many people from rural areas are drawn to cities to find work, but without residency rights for the city in which they work they cannot access public education or health care. So, Beijing is full of married men who have left their families at home in the provinces. Living in tiny apartments, they work all year and then travel back to their families for Chinese New Year, taking their earnings with them.

For someone used to freedom of movement in Australia, it is hard not to see this as a harsh system. But, reflecting on the numbers, China is a country of 1.3 billion people; if there are already 30 to 40 million people in Beijing, how would the city cope with a sudden influx of millions more? Only a few days ago, the central committee of China’s communist party released new targets to increase urbanisation from 53.7% of the population to 60% by 2020. This plan involves granting urban hukou status to an additional 100 million rural migrant workers. Even so, another 200 million migrants will remain non-residents. It is sobering to consider the potential consequences of granting full freedom of migration to the entire population rather than managing the process in this highly controlled fashion.

I’m not about to renounce my belief in democracy (however challenged it may be in many Western countries today), but, much like the CCTV building, it seems that to better understand China, you have to see it from a number of different angles.

Bringing Harmony to the Global Warming Debate

For some time now, our regular contributor James Glover has been promising me a post with some statistical analysis of historical global temperatures. To many, the science of climate change seems inaccessible, and the “debate” about climate change can appear to come down to whether you believe a very large group of scientists or a much smaller group of people. Now, with some help from James and a beer coaster, you can form your own view.

How I wish that the title of this article was literally true and not just a play on words relating to the Harmonic Series. Sadly, the naysayers are unlikely to be swayed, but read this post and you too can disprove global warming denialism on the back of a beer coaster!

It is true, I have been promising the Mule a statistical analysis of Global Warming. Not only did I go back and look at the original temperature data, but I even downloaded the data and recreated the original “hockey stick” graph. For most people the maths is quite complicated, though nothing beyond what an undergraduate in statistics would understand. It all works out. As a sort of professional statistician, who believes in Global Warming and Climate Change, I can only reiterate my personal mantra: there is no joy in being found to be right on global warming.

But before I get onto the beer coaster, let me give a very simple explanation of global warming and why the rise in CO2 causes it. Suppose I take two sealed glass boxes, identical apart from the fact that one has a higher concentration of CO2. I place them in my garden (let’s call them “greenhouses”) and measure their temperatures, under identical conditions of weather and sunshine, over a year. The one with more CO2 will have a higher temperature than the one with less. Every day. Why? Well, it’s simple: while CO2 is, to us, an “odourless, colourless gas”, this is only true in the visible light spectrum. In the infra-red spectrum, the box with more CO2 will be darker. This means it absorbs more infra-red radiation and hence has a higher temperature. CO2 is invisible to visible light but, on its own, would appear black to infra-red radiation. The same phenomenon explains why a black car will heat up more in the sun than a white one. This is basic physics and thermodynamics, understood in the 19th century when it was discovered that “heat” and “light” are part of the same phenomenon, i.e. electromagnetic radiation.

So why is global warming controversial? Well, while what I said is undeniably true of a pair of simple glass boxes, the earth is more complicated than these boxes. Radiation does not just pass through; it is absorbed, reflected and re-radiated. Still, if the earth absorbs more radiation than it re-emits, then the temperature will increase. It is not so much the surface temperature itself which causes a problem, but the additional energy that is retained in the climate system. Average global temperatures are just a simple way of trying to measure the overall energy change in the system.

If I covered the glass box containing more CO2 with enough aluminium foil, much of the sunshine would be reflected and it would have a lower temperature than its lower-CO2 twin. Something similar happens in the atmosphere. Increasing temperature leads to more water vapour and more clouds. Clouds reflect sunshine and hence there is less radiation to be absorbed by the lower atmosphere and oceans: a negative feedback. Maybe that’s enough to prevent global warming? Maybe, but clouds are very difficult to model in climate models, and water vapour is itself a greenhouse gas. Increasing temperature also decreases ice at the poles. Less ice (observed) leads to less radiation reflected and more energy absorbed: a positive feedback. It would require very fine tuning for the radiation reflected back by increased clouds to exactly counteract the increased absorption of energy due to higher CO2. Possible, but unlikely. Recent models show that CO2 wins out in the end. As I said, there is no joy in being found right on global warming.

So enough of all that. Make up your own mind. Almost time for the Harmony. Perusing the comments on a recent article about the alleged (and not actually real) “pause” in global warming, I came across a comment to the effect that “if you measure enough temperature and rainfall records then somewhere there is bound to be a new record each year”. I am surprised they didn’t invoke the “Law of Large Numbers”, as this sort of argument usually does; the Law of Large Numbers is actually something entirely different, but whatever. So I asked myself, beer coaster and quill at hand: what is the probability that the latest temperature or rainfall reading is the highest since 1880, or since any other year for that matter?

Firstly, you can’t prove anything using statistics. I can toss a coin 100 times and get 100 heads, and it doesn’t prove it isn’t a fair coin. Basically, we cannot know all the possible set-ups for this experiment. Maybe it is a fair coin, but a clever laser device adjusts its trajectory each time so it always lands on heads. Maybe aliens are freezing time and reversing the coin if it shows up tails, so I only think it landed heads. Can you assign probabilities to these possibilities? I can’t.

All I can do is start with a hypothesis that the coin is fair (equal chance of heads or tails) and ask what is the probability that, despite this, I observed 100 heads in a row. The answer is not zero! It is actually about 10⁻³⁰. That’s 1 over a big number: 1 followed by 30 zeros. I am pretty sure, but not certain, that it is not a fair coin. But maybe I don’t need to be certain. I might want to put a bet on the next toss being a head. So I pick a small number, say 1%, and say that if the chance of 100 heads is less than 1% then I will bet on the next toss being heads. After 100 tosses, the hypothetical probability (if it were a fair coin) is much less than my go-make-a-bet threshold of 1%, so I decide to put on the bet. It may then transpire that the aliens watching me bet, and controlling the coin, decide to teach me a lesson in statistical hubris and make the next toss tails, so I lose. Unlikely, but possible. Statistics doesn’t prove anything. In statistical parlance, the “fair coin” hypothesis is called the “Null Hypothesis” and the go-make-a-bet threshold of 1% is called the “Significance Level”.

Harmony. Almost. What is the probability that, if I have a time series (of, say, global temperatures since 1880), the latest temperature is a new record? For example, the average temperature in Australia in 2013 was a new record, while the last record average global temperature was in 1998. I think the series is trending upwards over time with some randomness attached, but there are all sorts of random processes which produce trends, some of which are equally likely to have produced a downward-trending temperature graph. All I can really do, statistically speaking, is come up with a Null Hypothesis. In this case, my Null Hypothesis is that the temperature doesn’t have a trend but is just the result of random chance. There are various technical ways to analyse this, but I have come up with one you can fit on the back of a beer coaster.

So my question is this: if the temperature readings are just i.i.d. random variables (i.i.d. stands for “independent and identically distributed”) and I have 134 of them (global temperature measurements from 1880 to 2013), what is the probability that the latest one is the maximum of them all? It turns out to be surprisingly easy to answer. If I have 134 random numbers then one of them must be the maximum. Obviously. Since they are i.i.d., I have no reason to believe it will be the first, second, third, …, or 134th: it is equally likely to be any one of those 134. So the probability that the 134th is the maximum is 1/134 = 0.75% (just as it is equally likely that, say, the 42nd is the maximum). If I have T measurements, then the probability that the latest is the maximum is 1/T. So when you hear that the latest global temperature is a maximum, and you don’t believe in global warming, you should be surprised. As a corollary, if someone says there hasn’t been a new maximum since 1998, then the probability of this still being true 14 years later is 1/14 = 7%.
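A quick simulation (a sketch; nothing that wouldn’t fit on a second beer coaster) bears out the 1/T result:

```python
# Check that, for T i.i.d. values, the last one is the maximum
# with probability 1/T.
import random

T, trials = 134, 100_000
hits = 0
for _ in range(trials):
    values = [random.random() for _ in range(T)]
    hits += values[-1] == max(values)

print(f"Simulated: {hits / trials:.4f}, theory 1/T: {1 / T:.4f}")  # ~0.0075
```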

So how many record years do we expect to have seen since 1880? Easy. Just add up, for each year since 1880, the probability that the maximum (up to that point) occurred in that year. That gives H(T) = 1 + 1/2 + 1/3 + … + 1/T, which is known as the Harmonic Series. It is famous in mathematics because, even though its terms shrink towards zero, it never quite converges. For our purposes it can be well approximated by H(T) ≈ 0.5772 + ln(T), where ln is the natural logarithm and 0.5772 is the Euler–Mascheroni constant.

So for T = 134, this simple beer-coaster-sized formula gives H(134) = 0.5772 + ln(134) = 5.47. (You can calculate this by typing “0.5772+ln(134)” into your Google search box if you don’t have a scientific calculator to hand.) In beer coaster terms, 5.47 is approximately 6. So, given the Null Hypothesis (which is that there has been no statistically significant upward trend since 1880), how many record-breaking years would we expect to have seen? Answer: fewer than 6. How many have we seen? 22.
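Or, doing the same sum in Python rather than on the coaster, here is the exact harmonic sum alongside the approximation:

```python
# Expected number of record years under the i.i.d. Null Hypothesis.
import math

def harmonic(T):
    """Exact harmonic number H(T) = 1 + 1/2 + ... + 1/T."""
    return sum(1 / k for k in range(1, T + 1))

T = 134
print(f"Exact H({T}): {harmonic(T):.2f}")
print(f"Beer-coaster H({T}): {0.5772 + math.log(T):.2f}")
# Both give about 5.5 expected records, against the 22 actually observed.
```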

Global temperatures* – labelled with successive peaks

If I was a betting man I would bet on global warming. But there will be no joy in being proven right.

James rightly points out that the figure of 22 peak temperatures is well above the 6 you would expect to see under the Null Hypothesis. But just how unlikely is that high number? And, what would the numbers look like if we took a different Null Hypothesis such as a random walk? That will be the topic of another post, coming soon to the Stubborn Mule!

* The global temperature “anomaly” represents the difference between observed temperatures and the average annual temperature between 1971 and 2000. Source: the National Climatic Data Center (NCDC) of the National Oceanic and Atmospheric Administration (NOAA).

I’m with Felix

Finance blogger Felix Salmon and venture capitalist Ben Horowitz have very different views of the future of Bitcoin. Salmon is a skeptic, while Horowitz is a believer. A couple of weeks ago on Planet Money, they agreed to put their differences to the test with a wager.

Rather than a simple bet on the value of Bitcoin, the bet centres on whether or not Bitcoin will move beyond its current status as a speculative curiosity to serve as a genuine basis for online transactions. The test will be a survey of listeners in five years’ time. If 10% or more of listeners are using Bitcoin for transactions, Horowitz wins. If not, Salmon wins. The winner will receive a nice pair of alpaca socks.

I have been fascinated by Bitcoin for some time now and have a very modest holding of 1.6 Bitcoin. Nevertheless, I believe that Felix is on the right side of the bet. I have no doubt that the technological innovation of Bitcoin will inform the future of digital commerce, but Bitcoin itself will not become a mainstream medium of exchange.

Volatility

Only days after the podcast, the price of Bitcoin tumbled as MtGox, the largest Bitcoin exchange in the world, suspended Bitcoin withdrawals due to software security problems. Sadly, this means my own little Bitcoin investment has halved in value. It also highlights what a roller-coaster ride the Bitcoin price is on. As long as Bitcoin remains this volatile, it cannot become a serious candidate for ecommerce: it is just too risky for both buyers and sellers. Horowitz acknowledges that the Bitcoin market is currently driven by speculators, but is confident that the price will eventually stabilise. I doubt this. Even during its most stable periods, the volatility of Bitcoin prices has been far higher than that of traditional currencies, and has been throughout its five-year history.

Bitcoin drop
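For anyone who wants to put numbers on that volatility comparison, the standard calculation runs along these lines (a sketch with placeholder data; substitute real BTC/USD and, say, AUD/USD daily closes to make the comparison):

```python
# Annualised volatility from a daily price series.
import math
import random

def annualised_vol(prices):
    """Annualised standard deviation of daily log returns."""
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(variance) * math.sqrt(365)  # Bitcoin trades every day

# Placeholder random-walk prices, purely to make the sketch runnable
prices = [100.0]
for _ in range(364):
    prices.append(prices[-1] * math.exp(random.gauss(0, 0.05)))

print(f"Annualised volatility: {annualised_vol(prices):.0%}")
```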

The Ledger

One of the key innovations of Bitcoin is its distributed ledger. Everyone installing the Bitcoin wallet software ends up downloading a copy of this ledger, which contains a record of every single Bitcoin transaction. Ever. As a result, there is no need for a central authority keeping tabs on who owns which Bitcoin and who has made a payment to whom. Instead, every Bitcoin user serves as a node in a large peer-to-peer network which collectively maintains the integrity of this master transaction ledger. This ledger solves one of the key problems with digital currencies: it ensures that I cannot create money by creating copies of my own Bitcoin. The power of the ledger does come at a cost. It is big! On my computer, the ledger file is now almost 12 gigabytes. For a new Bitcoin user, this means that getting started will be a slow process, and will make a dent in your monthly data usage. A popular way around this problem is to outsource management of the ledger to an online Bitcoin wallet provider, but that leads to the next problem.
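To give a flavour of the idea, a tamper-evident ledger can be as simple as a chain of hashed blocks. This toy sketch illustrates the principle only; Bitcoin’s real blocks add proof-of-work, Merkle trees and much more:

```python
# Toy hash-chained ledger: each block commits to the previous block's hash,
# so altering any old transaction breaks every later link.
import hashlib
import json

def add_block(chain, transactions):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev_hash": prev_hash, "transactions": transactions}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 0.1}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 0.05}])

# Tampering with an early transaction changes that block's hash, which no
# longer matches the next block's prev_hash, so peers can detect the fraud.
chain[0]["transactions"][0]["amount"] = 100
print(chain[1]["prev_hash"] == hashlib.sha256(json.dumps(
    {k: chain[0][k] for k in ("prev_hash", "transactions")},
    sort_keys=True).encode()).hexdigest())   # False: chain broken
```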

Trust Problems

A big part of the appeal of Bitcoin to the more libertarian-minded is that you no longer have to place trust in banks, government or other institutions to participate in online commerce. In theory, at least. If you decide to use an online Bitcoin wallet service to avoid the problem of the large ledger, you have to trust both the integrity and the security capability of the service provider. The hacking of inputs.io shows that this trust may well be misplaced. Even if you have the patience and bandwidth to maintain your own wallet, trust is required when buying or selling Bitcoin for traditional currency. There are many small Bitcoin brokers who will buy and sell Bitcoin, but invariably you have to pay them money before they give you Bitcoin, or give them Bitcoin before you get your money. Perhaps the big exchanges, like MtGox, should be easier to trust because their scale means they have more invested in their reputation. But they are not household names, the way Visa, Mastercard or the major banks are. Growth of commerce on the internet has been built on trust in the names providing the transactions more than trust in the technology, which most people don’t understand. I would be very surprised to see the same level of trust being established in the Bitcoin ecosystem, unless major financial institutions begin to participate.

The Authorities

But will banks jump onto the Bitcoin train? I doubt it. Not because they are afraid of the threat to their oligopoly—most bankers still only have the vaguest idea of what Bitcoin is or how it works. What they do know is that virtual currencies are attractive to criminals and money launderers. Last year saw the FBI crackdown on Liberty Reserve, followed by the crackdown on the underground black-market site Silk Road. More recently, the CEO of one of the better-known Bitcoin exchanges was arrested for money laundering. In the years since September 11, the regulatory obligations on banks to ensure they do not facilitate money laundering have grown enormously. The anonymity of Bitcoin makes it hard for banks to “know their customer” if they deal with Bitcoin and, as law enforcement increases its focus on virtual currencies, providing banking services to Bitcoin brokers becomes less appealing for banks. When I bought my Bitcoin last year, I used the Australian broker BitInnovate. For several months now, their Bitcoin buying and selling services have been suspended and, though I’m only guessing, this may be because their bank closed down their accounts. To become a widely-accepted basis for commerce, Bitcoin will necessarily have to interface effectively with the traditional financial system. At the moment, the prospects for this don’t look good.

For these reasons, I think Felix has a safe bet, and can look forward to cosy feet in alpaca socks. But, even if Bitcoin does not become widely accepted, its technological innovations may well revolutionise commerce anyway. Banks around the world can adopt ideas like distributed ledgers and cryptographically secure, irrevocable transactions to make the mainstream global payments system more efficient.

Where Have All The Genres Gone?

The Mule has returned safely from the beaches of the south coast of New South Wales. Neither sharks nor vending machines were to be seen down there. We did, however, have a guest drop in: none other than regular blog contributor James Glover. The seaside conversation turned to music, and James has distilled his thoughts into a blog post.

It seems timely to have a post with titular reference to the classic ’60s folk protest song “Where Have All The Flowers Gone”, written by Pete Seeger, who died this week at 94. But I have been thinking about this question for a while, not really as a music question but as a classification question. (If you are reading this in a pub, you might like to take a beer coaster and have a competition with a friend to write down as many musical genres as you can think of in 10 minutes. I assure you an argument will follow.)

Humans have an enormous tendency to classify things, but often, on closer inspection, these classifications turn out to be imprecise or just wrong. History shows many examples. The classification of living things has gone from two kingdoms (Plants and Animals) to five: the Eukaryotes, comprising Animals, Plants, Fungi and Protists (e.g. algae); and, separately, the Prokaryotes (no separate cell nucleus), which some biologists have since split into Bacteria and Archaea (e.g. extremophiles). And we cannot even agree on the number of continents versus large islands.

The point here is that what at first seemed like a very obvious and useful distinction becomes, as time passes, less distinct and may actually hinder further understanding, or be proved wrong and discarded. In physics, for example, the early 20th century atomic model of electrons, protons and neutrons has been replaced by the Standard Model, in which only the electron (of which there are now three types) has survived; protons and neutrons consist of quarks and gluons, and the model adds neutrinos and the Higgs particle besides. The racial classification of the 19th century – Caucasians (Whites), Negroids (Blacks), Mongoloids (Asians) – seemed obvious at the time but is highly problematic now (so much so that we no longer use two of the original terms), and has been shown by scientists to have no significant genetic basis. The term “intersex” (now an official gender classification in some European countries and in Australia) defies the classic binary gender classification of male/female, a classification so apparently “obvious” that until recently it really didn’t need explaining or justifying.

There are, naturally, two types of “genreism”. The first is based on evolution and radiation from one or a small number of original sources. In biology, classification was originally based on form and function, whereas now it is based on genetic lineage (the approach known as “cladistics”). This, for example, is why birds are now classed as “avian dinosaurs”, whereas when I was a child in school we learnt that the vertebrates (animals with a backbone) were split into mammals, fish, birds, amphibians and reptiles. The second type of genreism is based on differentiation within coincidentally existing groups, e.g. the fundamental particles, which all arose spontaneously (in the Big Bang, in this case) rather than evolving from a single particle (or did they?). OK, I guess there is also a third type of genre, which combines both, such as music or continents, where genres can arise spontaneously and then evolve, split, or even combine. Oh dear.

Back to the music though. In another era, circa 1987, I idly wondered whether there was room for any more musical genres. Trying to imagine a major new musical genre is pretty much impossible with my level of musicality, but towards the end of a decade that had given us New Romantic and HiNRG, I thought maybe it had all been done. It turns out I was a little wrong, as we were soon to see the explosion of Techno/House/Rave music and Hip Hop, and then, in the 90s, Grunge and Drum’n’Bass. Of course, these are arguably not major new genres in the way that Punk and Disco were in the 70s. House music is Electronica (as is Drum’n’Bass), while Grunge is just Garage, which itself is Rock music. Hip Hop is an extension of Rap. A quick search of “Electronica” on Wikipedia reveals several dozen sub-genres which would be virtually indistinguishable to anyone but aficionados or experts.

The point I’d really like to make (and I have asked this question online for several years to no avail) is why haven’t there been any new genres since before 2000?

So, before considering that question: what exactly is a “musical genre”? Given that genres are, by definition, quite different from one another, looking for something they all have in common doesn’t help. I guess each is a different expression of the following four components:

  1. Instruments, including vocals
  2. Beats
  3. Production/Arrangements
  4. Image

I am no musicologist, so this list may not be exhaustive or even the right way to look at it. I added “image” because a lot of allegedly different musical styles at different times really sound quite similar if you remove the clothing and image. Like taste in food, taste in music can be largely down to looks. This is particularly true for Pop. But when it comes to genres, it is very much “I don’t know what it is, but I’ll know it when I hear it”. Which also means that unless you are “into”, say, electronica or metal or jazz, it may all sound pretty much the same.

So what are the musical genres? You can find various lists on the internet, including this graphically useful presentation of genres through time, but here is my list. For each genre I have listed its derivatives in brackets, though often these are as significant as their progenitor (more significant, in the case of Disco). I have also not listed what I consider to be “sub-genres”, like Nu Metal, Trip Hop and New Electronica; these, arguably, come under derivations, deviations and revivals.

Gospel (Jazz, R&B, Soul)
Blues (R&B, Soul, Rock)
Rock (Folk, Psychedelia, Heavy Metal, Prog Rock, Glam, Reggae, Punk, Indie, Garage, Grunge)
Electronica (Techno, House, Rave, Drum’n’Bass, Chillout)
Rap (Scratch, Hip Hop)
Pop (Folk/Protest, Country & Western, Easy Listening, Indie, New Romantic, World Music, Lounge)
Funk (Disco, HiNRG, Techno, House)

It is not entirely linear, of course: Disco (Bee Gees) has more or fewer elements of Glam (early Bowie) and Funk (Sly Stone) depending on whether you are in Europe or America. I always thought Blondie was a Pop band, not a Punk (Sex Pistols, Ramones) band as they are often described in the U.S. Pop also contains a myriad of related styles with an emphasis on simple melodies and arrangements; even when the arrangements are actually quite complex (as with ABBA or Crowded House), they still sound quite simple to most listeners. Indie used to be based on relentlessly non-commercial music (Nick Cave, but pick your own favourite who never had a top 40 hit, at least until they sold out) until R.E.M. crossed over and maintained both critical and commercial success. Before R.E.M., it was considered a truism that you could have only one or the other, and Indie bands which later achieved major commercial success (Smashing Pumpkins) had invariably “sold out” and “lost cred” in the eyes of their early fans.

So maybe the answer is that there is no longer a need for musical genres. There is certainly plenty of “new” music. And as advances in technology make DIY production possible, and the internet means people no longer need to listen to a single local FM radio station promoting particular bands and genres, the very notion of genre becomes less useful. This is not unprecedented: modern movements in the visual arts (Impressionism, Cubism, Dada, Surrealism, Abstract Expressionism) have also disappeared since the 60s, when Pop Art (Warhol), Conceptual Art (Yoko Ono) and Street Art (Basquiat) finished them off. These days many artists work in multiple genres (Australia’s Patricia Piccinini is one) and the concept of the “Art Movement” itself, which so strongly defined much of Art History (and coffee-table Art Books), is now redundant.

So, saluting folk/rock pioneer Pete Seeger, maybe it’s time to put classification systems, for music at least, behind us and just recognise that genres were “a long time passing” but now they’re a “long time gone”. (I should also point out that there are two types of people in the world: those who like classifying things, and those who don’t.)