Author Archives: Stubborn Mule

The price of protectionism

An article in Friday’s Australian begins

Ford has blamed Kevin Rudd’s $1.8 billion fringe benefits tax overhaul for halting production, forcing at least 750 workers to be stood down in rolling stoppages that will further imperil Labor’s chances of retaining the nation’s most marginal seat.

and goes on to report that the Federal Chamber of Automotive Industries has called on Labor to reverse its changes to the application of fringe benefits tax (FBT) to cars.

So what exactly has Labor done to put these jobs at risk?

The previous regime provided two mechanisms to determine tax benefits for expenses incurred for cars used for work purposes:

  1. the “log book” method, whereby the driver maintained records to show what proportion of their use of the car was for work rather than personal use, or
  2. an assumed flat rate of 20% work use of the car (regardless of how often the car is actually used for work purposes).
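
To see why the flat-rate option was attractive, here is a stylised sketch. The figures and the simplified "concession" calculation are mine for illustration only, not the actual FBT formula:

```python
def work_use_concession(car_costs, work_fraction):
    """Portion of annual car costs treated as work-related,
    and so attracting concessional tax treatment."""
    return car_costs * work_fraction

costs = 10_000  # made-up annual car costs

flat = work_use_concession(costs, 0.20)     # statutory flat 20% method
logbook = work_use_concession(costs, 0.05)  # actual 5% work use per log book

# A driver with only 5% genuine work use does four times better
# under the flat method, with no records to keep
```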

The government has eliminated the second option. So, the estimated $1.8 billion saving suggests that a significant number of drivers using the 20% method could never come close to a 20% proportion of work use if they took the trouble to maintain a log book. Either that, or they do not think the tax benefit is worth the effort of maintaining the log book records.

While the elimination of this taxpayer largesse for drivers may come at a cost to workers in the car industry, does it really make sense to reverse the changes to save 750 jobs? These jobs would be saved at a cost to the taxpayer of $2.4 million per job. Admittedly, these are just the jobs at Ford and, for now at least, some Holden jobs may also be saved, bringing the cost closer to $1 million per job.

The car industry in Australia has long benefited from government support, but surely there are better ways of saving these jobs. A job guarantee springs to mind.

Of course, industry protectionism is far from unique to Australia and this week I had my attention drawn to an extreme example in the small Central American nation of Belize.

On 7 August, the parliament of Belize met for the first time since April. With so long between sittings, there were many bills for parliament to pass that day. Included among these was one which increased the already high import tariff on flour from 25% to 100%.

Wheat

Why such a dramatic increase? For some time, local bakers had been buying their flour from Mexico for 69 Belize dollars per sack (approximately A$38). It was hard to justify buying the more expensive local flour at BZ$81 per sack (A$45). The new tariff will push the price of Mexican flour up to around BZ$110 (A$61), which is good news for the domestic flour mill and its employees.
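
The arithmetic checks out if the quoted prices include the tariff, which is my assumption: stripping the old 25% tariff from the BZ$69 Mexican price and applying the new 100% rate lands very close to the reported BZ$110.

```python
old_price = 69                  # BZ$ per sack, including the old 25% tariff
pre_tariff = old_price / 1.25   # about BZ$55.20 before any tariff
new_price = pre_tariff * 2      # a 100% tariff doubles the pre-tariff price

# new_price comes to about BZ$110.40, matching the reported figure
```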

That domestic flour mill is operated by Archer Daniels Midland (ADM), one of the top 10 global commodity firms. This is the same ADM which is in the process of trying to buy GrainCorp, Australia’s largest agricultural business.

But back to Belize. ADM’s website proudly declares that it “employs more than 40 people” in its Belize mill. Presumably, parliament had an eye to saving these jobs from the threat of cheap Mexican flour when it hiked the import tariff. With a population of only 335,000, Belize is 1/70th the size of Australia. You could argue that saving 40 jobs in Belize is the equivalent of saving 2,800 in Australia and that this is a far more effective form of protectionism than reversing FBT reforms.

But protectionism always has consequences and in Belize these are easier to see than is often the case.

Bread in Belize is subject to price control, along with rice, beans and even local beer. By law, bakers must sell “standard loaves” of bread for BZ$1.75. The August sitting of parliament may have increased flour tariffs, but it did not increase the price bakers could charge for bread.

Bakers in Belize will see their profits squeezed, job losses may follow and there are more bakers in Belize than workers at the ADM mill. Needless to say, the Belize Baker’s Association is lobbying for an increase in the controlled price of bread.

Perhaps it is time for the Belize government to consider abandoning the flour tariff and trying a job guarantee instead.

ngramr – an R package for Google Ngrams

The recent post How common are common words? made use of unusually explicit language for the Stubborn Mule. As expected, a number of email subscribers reported that the post fell foul of their email filters. Here I will return to the topic of n-grams, while keeping the language cleaner, and describe the R package I developed to generate n-gram charts.

Rather than an explicit language warning, this post carries a technical language warning: regular readers of the blog who are not familiar with the R statistical computing system may want to stop reading now!

The Google Ngram Viewer is a tool for tracking the frequency of words or phrases across the vast collection of scanned texts in Google Books. As an example, the chart below shows the frequency of the words “Marx” and “Freud”. It appears that Marx peaked in popularity in the late 1970s and has been in decline ever since. Freud persisted for a decade longer but has likewise been in decline.

Freud vs Marx ngram chart

The Ngram Viewer will display an n-gram chart, but does not provide the underlying data for your own analysis. But all is not lost. The chart is produced using JavaScript, so the n-gram data is buried in the source code of the web page. It looks something like this:

// Add column headings, with escaping for JS strings.

data.addColumn('number', 'Year');
data.addColumn('number', 'Marx');
data.addColumn('number', 'Freud');

// Add graph data, without autoescaping.

data.addRows(
[[1900, 2.0528437403299904e-06, 1.2246303970897543e-07],
[1901, 1.9467918036752963e-06, 1.1974195999187031e-07],
...
[2008, 1.1858645848406013e-05, 1.3913611155658145e-05]]
)

With the help of the RJSONIO package, it is easy enough to parse this data into an R dataframe. Here is how I did it:

ngram_parse <- function(html){
  if (any(grepl("No valid ngrams to plot!",
                html))) stop("No valid ngrams.")
  # Extract the column names from the addColumn calls
  cols <- lapply(strsplit(grep("addColumn", html,
                               value=TRUE), ","),
                getElement, 2)
  cols <- gsub(".*'(.*)'.*", "\\1", unlist(cols))
  # Pull out the JSON array passed to addRows and parse it
  json <- paste(html[grep("addRows", html):length(html)], collapse="")
  json <- gsub(".*addRows\\((\\[\\[.*\\]\\])\\).*", "\\1", json)
  rows <- RJSONIO::fromJSON(json)
  # Assemble a dataframe using the extracted column names
  df <- as.data.frame(do.call(rbind, rows))
  names(df) <- cols
  df
}

I realise that is not particularly beautiful, so to make life easier I have bundled everything up neatly into an R package which I have called ngramr, hosted on GitHub.

The core functions are ngram, which queries the Ngram Viewer and returns a dataframe of frequencies; ngrami, which does the same thing in a case-insensitive manner (by which I mean that, for example, the results for "mouse", "Mouse" and "MOUSE" are all combined); and ggram, which retrieves the data and plots the results using ggplot2. All of these functions allow you to specify various options, including the date range and the language corpus (Google can provide results for US English, British English or a number of other languages, including German and Chinese).
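
By way of illustration, usage looks something like this. It is only a sketch: it assumes the package is installed and needs a live connection to the Ngram Viewer.

```r
library(ngramr)

# Fetch a dataframe of frequencies, starting from 1950
freq <- ngram(c("hacker", "programmer"), year_start = 1950)
head(freq)

# Case-insensitive variant: combines "mouse", "Mouse", "MOUSE", ...
mouse <- ngrami("mouse", year_start = 1950)

# Fetch and plot in one step, using the British English corpus
ggram(c("Marx", "Freud"), corpus = "eng_gb_2012", year_start = 1900)
```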

The package is easy to install from GitHub and I may also post it on CRAN.

I would be very interested in feedback from anyone who tries out this package and will happily consider implementing any suggested enhancements.

UPDATE: ngramr is now available on CRAN, making it much easier to install.

How common are common words?

One of my favourite podcasts is Slate’s Lexicon Valley. All about language, it is rigorous and detailed in its approach to the subject, which appeals to the closet academic in me, but also extremely entertaining. It is a sign of a good podcast to find yourself bursting out laughing while walking down a busy city street. Lexicon Valley is to blame for numerous moments of alarm for my fellow commuters.

In September last year, hosts Mike Vuolo (the knowledgeable one) and Bob Garfield (the funny one) interviewed linguist Geoffrey Nunberg, talking to him about his recent book, Ascent of the A-Word: Assholism the First Sixty Years. A half-hour discussion of the evolution of the word “asshole” helps earn this podcast an “Explicit” tag in the iTunes store and, as a result, this will be the first Stubborn Mule post that may fall victim to email filters. Apologies in advance to anyone of a sensitive disposition and to any email subscribers this post fails to reach.

Nunberg traces the evolution of “asshole” from its origins among US soldiers in the Second World War through to its current role as a near-universal term of abuse for arrogant boors lacking self-awareness. Along the way, he explores the differences between profanity (swearing drawing on religion), obscenity (swearing drawing on body parts and sexual activity) and plain old vulgarity (any of the above).

The historical perspective of the book is supported by charts using Google “n-grams”. An n-gram is any word or phrase found in a book and one type of quantitative analysis used by linguists is to track the frequency of n-grams in a “corpus” of books. After working for years with libraries around the world, Google has amassed a particularly large corpus: Google Books. Conveniently for researchers like Nunberg, with the help of the Google Ngram Viewer, anyone can analyse n-gram frequencies across the Google Books corpus. For example, the chart below shows that “asshole” is far more prevalent in books published in the US than in the UK. No surprises there.

"Asshole" frequency US vs UK

Use of “asshole” in US and UK Books

If “asshole” is the American term, the Australian and British equivalent should be “arsehole”, but surprisingly arsehole is less common than asshole in the British Google Books corpus. This suggests that, while being a literal equivalent to asshole, arsehole really does not perform the same function. If anything, it would appear that the US usage of asshole bleeds over to Australia and the UK.

Asshole/Arsehole frequencies

“asshole” versus “arsehole”

Intriguing though these n-gram charts are, they should be interpreted with caution, as I learned when I first tried to replicate some of Nunberg’s charts.

The chart below is taken from Ascent of the A-Word and compares growth in the use of the words “asshole” and “empathetic”. The frequencies are scaled relative to the frequency of “asshole” in 1972*. At first, try as I might, I could not reproduce Nunberg’s results. Convinced that I must have misunderstood the book’s explanation of the scaling, I wrote to Nunberg. His more detailed explanation confirmed my original interpretation, but meant that I still could not reproduce the chart.

Nunberg's chart: asshole versus empathy

Relative growth of “empathetic” and “asshole”

Then I had an epiphany. It turns out that Google has published two sets of n-gram data. The first release was based on an analysis of the Google Books collection in July 2009, described in the paper Michel, Jean-Baptiste, et al. “Quantitative analysis of culture using millions of digitized books”, Science 331, No. 6014 (2011): 176-182. As time passed, Google continued to build the Google Books collection and in July 2012 a second n-gram data set was assembled. As the charts below show, the growth of “asshole” and “empathetic” differs depending on which edition of the n-gram data set is used. I had been using the more recent 2012 data set and, evidently, Nunberg used the 2009 data set. While either chart would support the same broad conclusions, the differences show that smaller movements in these charts are likely to be meaningless and not too much should be read into anything other than large-scale trends.

Empathy frequency: 2009 versus 2012

Comparison of the 2009 and 2012 Google Books corpuses

So far I have not done very much to challenge anyone’s email filters. I can now fix that by moving on to a more recent Lexicon Valley episode, A Brief History of Swearing. This episode featured an interview with Melissa Mohr, the author of Holy Shit: A Brief History of Swearing. In this book Mohr goes all the way back to Roman times in her study of bad language. Well-preserved graffiti in Pompeii is one of the best sources of evidence we have of how to swear in Latin. Some Latin swear words were very much like our own, others were very different.

Of the “big six” swear words in English, namely ass, cock, cunt, fuck, prick and piss (clearly not all as bad as each other!), five had equivalents in Latin. The only one missing was “piss”. It was common practice to urinate in jars left in the street by fullers who used diluted urine to wash clothing. As a result, urination was not particularly taboo and so not worthy of being the basis for vulgarity. Mohr goes on to enumerate another five Latin swear words to arrive at a list of the Roman “big ten” obscenities. One of these was the Latin word for “clitoris”, which was a far more offensive word than “clit” is today. I also learned that our relatively polite, clinical terms “penis”, “vulva” and “vagina” all derive from obscene Latin words. It was the use of these words by the upper class during the Renaissance, speaking in Latin to avoid corrupting the young, that caused these words to become gentrified.

Unlike Nunberg, Mohr does not make use of n-grams in her book, which provides a perfect opportunity for me to track the frequency of the big six English swear words.

Big 6 Swearwords

Frequency of the “Big Six” swear words

The problem with this chart is that the high frequency of “ass” and “cock”, particularly in centuries gone by, is likely augmented by their use to refer to animals. Taking a closer look at the remaining four shows just how popular the use of “fuck” became in the second half of the twentieth century, although “cunt” and “piss” have seen modest (or should I say immodest) growth. Does this mean we are all getting a little more accepting of bad language? Maybe I need to finish reading Holy Shit to find out.

Big 4 Swear Words

Frequency of four of the “Big Six” swear words

* The label on the chart indicates that the reference year is 1972, but by my calculations the reference year is in fact 1971.

Feedburner on the fritz

Those of you who have subscribed to email updates from the Stubborn Mule will have noticed some strange behaviour lately, as old blog posts have appeared in your inboxes. Why this is happening remains a mystery to me. The email subscriptions are powered by Google’s Feedburner service and, with the recent announcement that Google is shutting down Google Reader, I am starting to wonder whether Google is deliberately sabotaging Feedburner as a precursor to shutting it down too.

The sabotage theory is a bit too extreme, but certainly others are speculating that Feedburner’s days may be numbered. In any event, the time has come for me to look for an alternative in an attempt to stop the random emails.

I have looked at Feedblitz and have been bombarded with marketing materials as a result, so that one is not for me. Mailchimp is a possibility.

While I am weighing my options, I would welcome suggestions from other bloggers who have successfully made the move from Feedburner.

Can I trust MtGox with my passport?

Liberty Reserve logo

In March 2013, the US Financial Crimes Enforcement Network (“FinCEN”) published a statement saying that companies which facilitate buying and selling of “virtual” currencies like Bitcoin constitute “money service businesses” and are subject to reporting obligations designed to prevent money laundering and other financial crimes.

A couple of months later, the seizure by US authorities of Liberty Reserve has shaken money service businesses around the world, whether they deal in “real” or “virtual” currencies.

Two days later, the largest Bitcoin exchange, MtGox, tightened its anti-money laundering (AML) controls, posting the following statement on its website:

Attention Users: From May 30th 2013 all withdrawals and deposits in fiat [real] currency will require account verification. However withdrawals and deposits in Bitcoin (BTC) do not require verification.

What MtGox is attempting to do here is meet one of the most fundamental requirements of AML legislation around the world: know your customer. It is so fundamental that it too earns its own three-letter abbreviation, KYC.

So, how does an online business like MtGox verify the identity of its customers? After all, you can’t walk into the local MtGox branch with a fist full of paperwork. Instead, you must upload a scan of “proof of identity” (passport, national ID card or driver’s licence) and “proof of residency” (a utility bill or tax return).

MtGox is not alone in this approach. More and more online money service businesses are attempting to get on the right side of AML rules by performing verification in this way.

Here in Australia, there are still some Bitcoin brokers which do no verification whatsoever, including BitInnovate (who helped me buy my first Bitcoin) and OmniCoins. Australia’s AML regulator, AUSTRAC, publishes a list of “designated services” which make businesses subject to reporting obligations, including customer verification. The list includes

exchanging one currency (whether Australian or not) for another (whether Australian or not), where the exchange is provided in the course of carrying on a currency exchange business

So I strongly suspect that all local Bitcoin brokers will soon be demanding scans of your driver’s licence and electricity bill too.

But is the MtGox approach to customer verification a good idea? I don’t think so. I believe it is a bad idea for MtGox and a bad idea for their customers.

It is a bad idea for MtGox because scans of fake identity documents are very easy to come by. For example, one vendor at the online black market Silk Road offers custom UK passport scans with the name and photo of your choice, complete with a scan of a matching utility bill.

It’s a bad idea for the customer too, because it exposes them to increased risk of identity theft. Although my intentions were not criminal, I chose BitInnovate when I bought Bitcoin precisely because I did not have to provide any personal documents. How well do you know MtGox or any other online money service? How confident are you that they will be able to keep their copies of your documents secure? Securing data is hard. Every other week it seems that there are stories of hackers gaining access to supposedly secure password databases. I have no doubt that scans of identity documents will also find their way into the wrong hands.

So what is the alternative?

Third party identity management.

Using a passport or driver’s licence scan is effectively outsourcing identity verification to the passport office or motor registry respectively. Before the days of high quality scanning and printing, these documents were difficult to forge. A better solution is to retain the idea of outsourcing, but adapt the mechanism to today’s technology.

Here’s how it could work.

A number of organisations would establish themselves as third party identity managers. These organisations should be widely trusted and, ideally, have existing experience in identity verification. Obvious examples are banks and government agencies such as the passport office.

Then if I wanted to open an account with MtGox, its website would provide a list of identity managers it trusted. Scrolling through the list, I may discover that my bank is on the list. Perfect! When I first opened an account with my bank I went through an identity verification (IDV) check (ideally, this would have been done in person and, even better, the bank would have some way to authenticate my passport or driver’s licence*), so my bank can vouch for my identity. I can then click on the “verify” link and I am redirected to my bank’s website. Being a cautious fellow, I check the extended validation certificate, so I know it really is my bank. I then log into my bank using multi-factor authentication.

My bank now knows it’s really me and it presents me with a screen saying that MtGox has asked for my identity to be validated and, in the process, has requested some of the personal data my bank has on file. The page lists the requested items: name, address, email address and nationality. I click “authorise” and find myself redirected to MtGox and a screen saying “identity successfully verified”.
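
To make the flow concrete, here is a minimal sketch of the final step: the identity manager hands back a signed assertion covering only the requested attributes, which MtGox verifies before trusting. All names here are invented, and a real protocol would use public-key signatures and OAuth-style redirects rather than the shared HMAC key used for brevity.

```python
import hashlib
import hmac
import json

# Shared key established when MtGox registers with the identity manager
# (an assumption for this sketch; real systems would use asymmetric keys)
SHARED_KEY = b"mtgox-registration-secret"

def issue_assertion(key, customer):
    """Identity manager: sign the verified customer attributes."""
    payload = json.dumps(customer, sort_keys=True)
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_assertion(key, assertion):
    """Relying party (MtGox): check the signature before trusting the data."""
    expected = hmac.new(key, assertion["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["signature"]):
        raise ValueError("assertion has been tampered with")
    return json.loads(assertion["payload"])

# The bank vouches for just the four requested attributes
assertion = issue_assertion(SHARED_KEY, {
    "name": "A. Customer", "address": "1 Example St",
    "email": "a@example.com", "nationality": "Australian"})
customer = verify_assertion(SHARED_KEY, assertion)
```

The point of the design is data minimisation: MtGox receives only the four attributes it asked for, and nothing it could later leak to an identity thief.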

MtGox is now more confident of my true identity than they would be with scanned documents and I have kept to a minimum the amount of information I need to provide to MtGox: no more than is required to meet their AML obligations.

This authentication protocol is a relatively straightforward enhancement to the “OAuth” protocol used by sites like Twitter and Facebook today. OAuth itself is subject to some controversy, and it may be better to create a new standard specifically for high trust identity management applications like this, but the tools exist to put identity management on a much safer footing.

* Today, unfortunately, banks and other private sector entities are not readily able to authenticate passports or driver’s licences. Once government agencies are able to provide this service, the options for third party identity management will be even greater.

 

BitTorrent Sync

BitTorrent Sync logo

I have been a long-time user of Dropbox. It synchronises important files across computers, provides offsite backup and remote access to these files. But it does have its limitations.

A free Dropbox account gets you 2 gigabytes of storage (although persuading friends to sign up can earn you an increase in this limit). If you need more space, paid plans start at $10 per month.

I have found a new solution for file synchronisation without the size limits. BitTorrent Sync is still in its beta stage of development, but so far I have found it works very well. It is fast, efficient and does exactly what I want it to do.

BitTorrent Sync is not a cloud storage system, so it does not offer all of the features of Dropbox. But anyone with more than one computer, or anyone who wants to regularly share files with a friend or colleague, will quickly find BitTorrent Sync an invaluable tool.

So what exactly does BitTorrent Sync do, and what doesn’t it do?

Two-Way Synchronisation – YES

BitTorrent Sync really does one thing and one thing well: synchronisation. Install BitTorrent Sync on two computers, point it at a folder on each computer and it will ensure that the contents of the two folders stay in sync. Change a file on one computer and it will change on the other. Add a new file and it will quickly appear on the other computer.

I have a desktop machine and a laptop. They both have Dropbox installed, so I usually save documents in my Dropbox folder to ensure I have access from both machines. But my Dropbox account is getting full, so if I am working with a large dataset or large image files, I keep them out of Dropbox. I then inevitably find I need to use those files on a different machine. BitTorrent Sync has solved that problem for me.

Synchronisation works like a rocket on a local network, but will also work over the internet. As the name suggests, BitTorrent Sync makes use of the same technology used in BitTorrent and is extremely efficient when it comes to dealing with very large files. Synchronisation over the internet when users at each end are behind their own routers works well, thanks to “NAT traversal” techniques similar to those used by Skype. All file transfers, whether local or over the internet, are encrypted. As long as you keep your secret safe, your data is safe.

Setting up synchronisation is straightforward. When you first point BitTorrent Sync at a folder, a “secret” is generated. Secrets are strings of numbers and letters, like this: WBUAH4P6P41KAPJ7ERSAWXY5RB2BCT28. Then, when setting up other machines to share the same folder, all you need to do is enter the secret from the first computer. Multiple machines can share the same folder with the same secret and BitTorrent Sync can also manage multiple folders with different secrets.
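
BitTorrent Sync generates these secrets for you, but producing a random string in the same style is straightforward. The 32-character, letters-and-digits format here is inferred from the example above, not from any published specification:

```python
import secrets
import string

# Alphabet inferred from the example secret: upper-case letters and digits
ALPHABET = string.ascii_uppercase + string.digits

def make_secret(length=32):
    """Generate a random secret: 32 characters drawn uniformly
    from A-Z and 0-9 gives roughly 165 bits of entropy."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

secret = make_secret()
```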

One-Way (Read Only) Synchronisation – YES

While two-way synchronisation works well for sharing files with family and friends, sometimes you will want to give others read access to files without allowing them to delete or edit the files. This is where one-way synchronisation comes in. Each synchronised folder has a “read only secret” in addition to the main secret. Give this read only secret to your mother and she can see all of your family photos, and you need not worry that she will accidentally delete any of them*.

As far as I know, Dropbox does not offer one-way synchronisation.

Mobile Access – NOT YET

Dropbox offers apps for iPhone, iPad and Android devices which allow you to access files on the go. Mobile apps for BitTorrent Sync are not yet available, but they are under development.

Cloud Backup – NO

BitTorrent Sync directly syncs content machine to machine. Dropbox, on the other hand, syncs each machine with Dropbox’s own servers. If all of your computers suffer catastrophic failure, you can still recover your data from Dropbox. BitTorrent Sync does not provide any cloud backup. Of course, you could always set up a Rackspace server and install BitTorrent Sync there…

Web Access – NO

With all of your files on their servers, Dropbox can easily provide web access to your files. BitTorrent Sync cannot. The files will only be available on machines with BitTorrent Sync installed.

Version Control – NO

Another useful feature offered by Dropbox is version control. If you make some drastic edits to your latest presentation, which you later regret, Dropbox allows you to recover previously saved versions. BitTorrent Sync will not help you with version control.

BitTorrent Sync does not do as much as Dropbox and other cloud backup services. But what it does do, it does very well. I expect to get a lot of use out of it.

* Two-way synchronisation does provide protection against accidental deletion: when a file is deleted on one machine, copies on other machines are moved to a hidden folder rather than deleted, so they can be recovered later.

 

 

Unfounded liability

Today a tweet from “Australia’s most idiosyncratic economist” Christopher Joye caught my eye. I followed the link and found a scaremongering article trying to whip up concerns about Australia’s levels of government debt.

cjoye tweet

A key part of Joye’s argument is to accuse the government of creative accounting by including Future Fund assets in the calculation of net debt. Carving out these assets, along with some other tactics, leads him to assert that the true size of the government’s debt is around 40%, not 11%, of GDP. But it is Joye’s accounting that is flawed, not the government’s.

Joye’s argument centres on the notion that government pension obligations to public sector employees constitute an “unfunded liability”. Unlike other liabilities, such as government bonds, this liability is not included in the calculation of the government’s debt, thereby understating it. To remedy this, Joye argues that since the Future Fund was created for the precise purpose of funding these pension liabilities, excluding its assets from the net debt calculation corrects for the omission of the unfunded liabilities.

Superficially, this argument can sound plausible. But, closer scrutiny shows that Joye is cherry-picking to distort the numbers.

Analogies between government and household finances can be dangerous, but I will cautiously draw one here to illustrate the point. Imagine a family with a $300,000 house financed with a $200,000 mortgage, a net asset position of $100,000. Over time, the family works to save and pay down the mortgage. But they also want their daughter to attend a private high school and have been putting money aside into a saving fund to be able to afford the fees. A few years later, the debt has been paid down to $175,000 and they have put $25,000 into the school fund. So how does the family balance sheet look now? Assuming that property prices are unchanged, the family has assets of $325,000 (house and saving fund) and a debt of $175,000, so net assets of $150,000.

Not so fast, Christopher would argue! Those school fees are an unfunded liability! Since the school fund is there solely to fund that liability, it should be excluded, so the family only has assets of $125,000.

It’s nonsense of course. A commitment to pay pensions (or school fees) is a liability of sorts, in that it entails a commitment to making payments in the future. But why stop there? The government is also committed to making welfare payments, so there’s another unfunded liability. We can ignore the baby bonus, as that’s likely to be eliminated, but the government has a whole range of commitments to future payments.

But that ignores all the sources of future receipts for the government. If public pensions are an unfunded liability, what about the unfunded asset represented by all future income tax receipts? Corporate taxes provide another solid income stream, not factored into the government’s assets.

The family’s school fees are a liability of sorts, but their capacity to earn income into the future effectively provides an even greater asset. Both are uncertain, which is why accountants stick to financial assets, like loans, bonds and deposits or even stocks, land or houses, all of which have a relatively clear value today and, more importantly, can be bought or sold for figures very close to those assessed values.

Christopher Joye drastically overstated the government’s net debt position by factoring in future government payments and ignoring future government receipts. As the less “idiosyncratic” economist Stephen Koukoulas eloquently put it:

This is like painting a red dot on a daddy long legs and telling people it is a redback spider.

Bitcoin: what is it good for?

Bitcoin has been a hot topic in the news over the last few weeks.

The digital currency has its adherents. The Winklevoss twins, made famous by the movie The Social Network after suing Mark Zuckerberg for allegedly stealing the concept of Facebook, now purportedly own millions of dollars worth of Bitcoins.

It also has its detractors. Paul Krugman has argued that the whole enterprise is misguided. Bitcoin aficionados are, he writes, “misled by the desire to divorce the value of money from the society it serves”.

Still others cannot seem to make up their minds. The digital advocacy group the Electronic Frontier Foundation (EFF) accepted Bitcoin donations for a time, but became uncomfortable with its ambiguous legal status and shady associations, such as with the online black market Silk Road, and decided to stop accepting Bitcoin in 2011. A couple of years on, and the EFF’s activism director is speaking at the conference Bitcoin 2013: The Future of Payments.

Recent media interest has been fuelled by the extraordinary roller-coaster ride that is the Bitcoin price. In early April, online trading saw Bitcoins changing hands for over US$200. At the time of writing, prices are back below US$100. As with many markets, it’s hard to say exactly what is driving the price. Speculators, like the Winklevoss twins, buying Bitcoins will have helped push up prices, while reports that Silk Road has suffered both a deflation-driven collapse in activity and hacking attacks may have contributed to the down-swings.

Bitcoin (USD) prices

Although not obvious on the chart above, dramatic price movements are nothing new for Bitcoin. Switching to a logarithmic scale makes the picture clearer. After all, a $2 fall from a price of $10 is just as significant as a $40 fall from a price of $200. The 60% fall from $230 to $91 over April has certainly been dramatic. But back in June 2011, after reaching a peak of almost $30, the price fell by 90% within a few months.
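
The equivalence is easy to verify: on a log scale a move is measured by the price ratio, not the dollar difference, so the two falls are exactly the same size.

```python
import math

# On a log scale a price move is measured by the ratio of prices
small_fall = math.log(8 / 10)     # a $2 fall from $10
large_fall = math.log(160 / 200)  # a $40 fall from $200

# Both equal log(0.8): identical moves on a log chart
```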

Bitcoin price history (log scale)

The volatility of Bitcoin prices is orders of magnitude higher than traditional currencies. Since the start of the year the price of gold has been tumbling, with a consequent spike in its price volatility. Even so, Bitcoin’s volatility is almost ten times higher. The chart below compares the volatilities of Bitcoin, gold and the Australian dollar (AUD).
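
For anyone who wants to reproduce this sort of comparison, historical volatility is conventionally calculated as the annualised standard deviation of daily log returns. Here is a sketch, with made-up prices standing in for real data:

```python
import math
import statistics

def historical_vol(prices, periods_per_year=365):
    """Annualised standard deviation of daily log returns
    (365 periods per year, since Bitcoin trades every day)."""
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    return statistics.stdev(returns) * math.sqrt(periods_per_year)

# Toy daily closes standing in for a real Bitcoin price series
prices = [230, 210, 150, 91, 120, 100, 95]
vol = historical_vol(prices)
```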

Historical volatility of Bitcoin

A week or so ago, armed with this data, I was well advanced in my plans for a blog post taking Bitcoin as the basis for a reflection on the nature of money. I would start with some of the traditional, text-book characteristics of money. A medium of exchange? Bitcoin ticks this box, with a growing range of online businesses accepting payment in Bitcoin (including WordPress, so not just underground drug sites). A store of value? That’s more dubious, given the extremely high volatility. It may appeal to speculators, but with daily volatility of around 15%, it’s hard to argue that it is a low risk place to park your cash. A unit of account? Again, the volatility gets in the way.

That was the plan, until a conversation with a colleague propelled me in a different direction.

She asked me what this whole Bitcoin business was all about. Breezily, I claimed to know all about it, having first written about Bitcoin two years ago and then again a year later. I launched into a description of the cryptographic basis for the operation of Bitcoin and went on to talk about its extreme volatility.

I then remarked that when I first wrote about it, it was only worth about $1, but had since risen to over $200.

“So,” she asked, “did you buy any back then?”

That shut me up for a moment.

Of course I hadn’t bought any. What gave me pause was not that I had missed an investment opportunity that would have returned 20,000%, but that I was so caught up in the theory of Bitcoin that it had not occurred to me to see what transacting in Bitcoin was actually like in practice. So I resolved to buy some.

This turned out not to be so easy. While there are many Bitcoin exchanges, paying for Bitcoins means jumping through a few hoops. Perhaps because the whole philosophy of Bitcoin is to bypass the traditional banking system. Perhaps because banks don’t like the look of most of them and will not provide them with credit card services. Whatever the reason, your typical Bitcoin exchange will not accept credit card payments. Many insist on copies of a passport or driver’s licence before allowing wire transactions, neither of which I would be prepared to provide.

Eventually I found BitInnovate, which allows the purchase of Bitcoin through Australian bank branches. Even so, the process was an elaborate one. After placing an order on the site, payment must be made in person (no online transfers), in cash, at a branch within four hours of placing the order. If payment is not made, the order is cancelled. Elaborate, but manageable, and no identification is required.

But before I could proceed, I had to set myself up with a Bitcoin wallet. As a novice, I chose the standard Bitcoin-Qt application. I downloaded and installed the software, and then it began to “synchronise transactions”. This gets to the heart of how bitcoins work. As a purely digital currency, they are based on “public key cryptography”, which is also the basis for all electronic commerce across the internet. The way I make a Bitcoin payment to, say, Bob is to electronically sign it over to him using my secret “private key”. Anyone with access to my “public key” can then verify that the Bitcoin now belongs to Bob not me. Likewise, the way I get a Bitcoin in the first place is to have it signed over to me from someone else. In case you are wondering what one of these Bitcoin public keys looks like, mine is 1Q31t2vdeC8XFdbTc2J26EsrPrsL1DKfzr. Feel free to make Bitcoin donations to the Mule using that code!
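The signing-and-verifying asymmetry can be illustrated with a toy example. The sketch below uses textbook RSA with tiny, hopelessly insecure numbers purely to show the idea; Bitcoin itself actually uses ECDSA over the secp256k1 curve, and the transaction message here is invented:

```python
import hashlib

# Toy RSA key pair (textbook numbers, far too small to be secure).
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def sign(message: bytes) -> int:
    """Only the holder of the private key d can produce this."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone with the public key (n, e) can check it."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

tx = b"1 BTC from the Mule to Bob"
sig = sign(tx)
print(verify(tx, sig))   # True
```

Anyone with the public key can confirm that the message was signed by the private key holder, but cannot forge a signature themselves. That is the asymmetry at the heart of every Bitcoin transaction.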

In this way, rather than relying on a trusted third party (such as a bank) to keep track of transactions, the ownership of every one of the approximately 11 million Bitcoins is established by the historical trail of transactions going back to when each one was first “mined”. Actually, it’s worse than that, because Bitcoin transactions can involve fractions of a Bitcoin as well.
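A toy sketch of such a trail might look like the following Python, with made-up names and amounts; it omits the signatures and real Bitcoin’s serialisation format entirely, keeping only the idea that each transfer references the hash of the one before it:

```python
import hashlib
import json

def tx_hash(tx):
    """Hash a transaction's contents (toy scheme, not Bitcoin's)."""
    return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()

def transfer(prev_tx, sender, receiver, amount):
    """Each transfer records the hash of the previous transaction."""
    return {"prev": tx_hash(prev_tx) if prev_tx else None,
            "from": sender, "to": receiver, "amount": amount}

genesis = transfer(None, "miner", "Alice", 1.0)   # coin first "mined"
t1 = transfer(genesis, "Alice", "Bob", 1.0)
t2 = transfer(t1, "Bob", "Carol", 0.4)            # fractions are fine

def trail_is_valid(chain):
    """Replay the trail: every link must reference its predecessor's hash."""
    return all(cur["prev"] == tx_hash(prev)
               for prev, cur in zip(chain, chain[1:]))

print(trail_is_valid([genesis, t1, t2]))   # True
```

Tamper with any transaction in the history and every later link breaks, which is why ownership can be established by replaying the trail — and why a full wallet needs the whole history to do so.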

So, when my Bitcoin wallet told me it needed to “synchronise transactions”, what it meant was that it was about to download a history of every single Bitcoin transaction ever. No problem, I thought. Two days and 9 gigabytes (!) later, I was ready for action. Now I could have avoided this huge download by using an online Bitcoin wallet instead, but then I would have been back to trusting a third party, which rather defeats the purpose.

The cryptographic transaction trail may be the brilliant insight that makes Bitcoin work, and I knew all about it in theory. But in practice, it may well also be Bitcoin’s fatal flaw. Today, a new wallet will download around 10 gigabytes of data to get started, and that figure will only grow over time. The more successful Bitcoin is, the higher the barrier to entry for new users will become. I suspect that means Bitcoin will either fail completely or simply remain a niche novelty.

Still, it is an interesting novelty, and despite the challenges, I decided to continue with my investigations and managed to buy a couple of Bitcoins. The seller’s commission was $20 and falling prices have since cost me another $20 or so. So, I am down on the deal, but, as I have been telling myself, I bought these Bitcoins on scientific rather than investment grounds.

Of course, if the price goes for another run, I reserve the right to change my explanation.

Quandl

I spend a lot of time trawling the internet for data, particularly economic and financial data. Yahoo Finance and Google Finance are handy for market data, and “FRED”, the St. Louis Fed’s database, is an excellent, albeit US-centric, resource for a broad range of financial aggregates. While these sites make it very easy to automate data downloads, most others (including, unfortunately, the Australian Bureau of Statistics) provide data in Excel format or other inconvenient forms. This has become sufficiently frustrating that I have periodically entertained vague plans to build my own time-series data website, one that would source data from across the world and the web and make it available in a consistent, useful way.

Needless to say, I never got around to it, but it seems that someone else has. Today I stumbled across Quandl, which aggregates and re-publishes over 5 million time-series. The data can be presented as charts on their website, downloaded or accessed programmatically through their application programming interface (API). There is even an R package available to make it easy to load data directly into my favourite statistical package, R.

Here is an example of how it all works. Quandl has data on the Australian All Ordinaries index. To read this data into R, you will first need to register with Quandl and obtain an authentication key for the API. This key is a random string, which looks something like this: jEGfHz9HF7C3zTus6ZuK (this one is not a real key!). Once you have your key, you can fire up R and install and load the R package by entering the following commands:

install.packages("Quandl")
library(Quandl)

Once this is done, you will need to find the Quandl code for the data you are interested in. Near the bottom of the Quandl page, there is a pane showing the data-set information, including the provenance of the data.

Quandl data-set information pane

Armed with the text labelled “Quandl Code”, in this case “YAHOO/INDEX_AORD”, you now have everything you need. I will assume you already have the ggplot2 and scales packages installed. To plot the history of the All Ordinaries, simply enter the following code (replacing the string in the third line with your own authentication key).

library(ggplot2)
library(scales)
Quandl.auth("jEGfHz9HF7C3zTus6ZuK")
aord <- Quandl("YAHOO/INDEX_AORD")
ggplot(aord, aes(x=Date, y=Close)) + geom_line() + labs(x="")

All Ordinaries

I can see I am going to have fun with Quandl. It even has Bitcoin price history. But that is a subject for another post.

Wall of Liquidity

Once again a misconception is gaining currency. There is increased talk of a build up of cash just waiting to be converted into equities or other assets. I wrote about this years ago in cash on the sidelines, but apparently the financial commentariat did not read the post, so it is time to revisit the subject.

I believe the reason the misconception is so widespread is that the subject is discussed not in technical terms but in metaphors. Some of you will have heard the phrase “the great rotation”, which refers to the idea that investors will shift en masse from cash and bonds to shares. It’s a compelling phrase, but it leaves a couple of questions unanswered: who will sell the shares to these rotating investors? And since the sellers will be paid for their shares, what happens to the money they receive? It’s still cash, after all. Likewise, if the rotators are selling their bonds, someone has to buy them. Post-rotation, there is still just as much cash in the system and just as many bonds. Cash and bonds don’t just magically turn into shares. Reality is messy…why spoil a good metaphor?

A simpler, more dramatic and more vacuous metaphor that has also made a reappearance is the “wall of liquidity”.

Wall of Liquidity

No one using this compelling phrase would be so crass as to explain what it means. Such is its power, it is assumed that we all know what it means. So, let’s have a look at “wall of liquidity” out in the wild. In an article about rising bank share prices, Michael Bennet wrote in The Australian:

But pump-priming by global central banks has created a so-called wall of liquidity looking for income that is flowing out of cash and into high-dividend-paying stocks, with banks attractive due to their fully franked dividends.

Here it certainly sounds as though “wall of liquidity” is just “cash on the sidelines” in a fancy suit. But let’s zero in for a moment on the other metaphor in this sentence, “pump-priming”. Doubtless, the author has the US Federal Reserve (Fed) in mind. The standard line runs something like this: with low interest rates and purchases of securities through the “QE” (quantitative easing) programs, the Fed has flooded the banks with liquidity. More prosaically, reserve balances (i.e. the accounts banks have with the Fed) have grown. So far so good, as the chart below shows.

The next step in this line of thinking is that as this cash builds, it is a “wall of liquidity” desperate to find somewhere to go and, in the quest for investments, it will push up asset prices.

But before we can accept this reasoning, there is an important point to note. Reserves with the Fed are assets of banks only. Contrary to a common misconception, these reserves cannot be lent out; they can only be shuffled around from bank to bank. Nevertheless, there is a theory that, because a certain percentage of bank deposits must be backed by reserve balances in the US and some other countries, there is a “money multiplier” which determines a fixed relationship between reserve balances and bank deposits*. If this theory were correct, bank deposits should have grown as dramatically as reserve balances. They have not.
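To see just what the multiplier theory would predict, here is a back-of-the-envelope calculation in Python; the reserve ratio and reserve figure are assumed for illustration, not actual Fed data:

```python
# Textbook money multiplier: with a required reserve ratio r, total
# deposits are capped at reserves / r. Illustrative numbers only.
reserve_ratio = 0.10      # assume 10% of deposits must be backed by reserves
reserves = 1.8e12         # suppose $1.8 trillion of reserve balances

implied_deposits = reserves / reserve_ratio
print(f"${implied_deposits / 1e12:.0f} trillion")  # $18 trillion
```

On these assumed numbers, deposits would have had to balloon tenfold along with reserves. Note too that as the required ratio approaches zero, the implied cap on deposits heads to infinity, which is the point made in the footnote about Australia.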

M1 money

Taking the same chart and displaying it on a log scale shows that growth in deposit balances has been very steady over the last 20 years.

M2 - log scale

Whatever is going on in financial markets, it has nothing to do with a dramatic build up of cash which is poised to be converted into “risk assets”.

Yet another way to see this is to think about what is going on in Australian banks at the moment. Credit growth is slow in Australia. This is not because banks are reluctant to lend. Quite the contrary. Banks are looking at the slow credit growth and fretting about their ability to deliver the earnings growth that their shareholders have come to expect. The problem is that there is a lack of demand for credit as households and businesses continue to save and pay down debts. In response, banks have begun to compete aggressively on price and, in some cases, on terms to attempt to grow the size of their slice of a pie that is not growing. And yet these very same banks continue to compete for customer deposits. Australian banks are not sitting on vast cash reserves that are compelling them to lend. Rather, it is simply renewed risk appetite that is driving banks to compete to lend.

The same is true around the world. Looking at cash balances as a sign that yields will fall and asset prices will rise is a pointless exercise. What is happening is much simpler. Animal spirits are emerging once more. Low interest rates (not cash balances) will help, but fundamentally it is risk appetite that drives markets.

The last time I heard people talking in terms of walls of liquidity was in 2005-2006, in the lead-up to the global financial crisis. These putative piles of cash were used to justify a paradigm shift in which the returns for risk could stay low indefinitely. Of course this turned out to be dramatically wrong. The cash didn’t disappear, but risk appetite did. I am not predicting another crash yet, but I do foresee this nonsense being used to justify more risk-taking for lower returns. If that happens for long enough, then there will be another crash.

* As an aside, given that Australia has no minimum reserve requirements, if the money multiplier theory was valid, there should be an infinite amount of deposits in the Australian banking system. For the record, this is not the case.

Photo credit: AP