Sometime Stubborn Mule contributor, John Carmody, finds himself in the UK at the time of the Brexit vote and has filed the following report. Meanwhile, back here in Australia, the Mule is watching anxiously for signs that we are on the verge of the end of Western civilisation “in it’s entirety”.

On the night before the “Brexit” poll, London had heavy rain with much thunder and lightning: Donner and Blitzen, if we want to be Europeans. On polling day itself London had further downpours, with associated disorder on transport and traffic, all of which created real difficulties in what was regarded as a “Remain” stronghold. It was very striking to me, having been in London for the past few days, how prominent the “Remain” supporters were on the streets (as was also the case when I visited Cambridge): the “Leave” supporters were silent and not to be seen there.

Many schools were closed for the day because they were commandeered as polling places (oddly, the British still vote on Thursdays because, as I’ve been told, that was “Market Day” hundreds of years ago and therefore people “came to town”: so much for progress and change here). If the day seemed “business as usual”, I saw some hint of the latent tensions late in the afternoon when I strolled into a polling place in Charing Cross Road. It looked like a second-hand bookshop, and apart from a few officials I was the only person there. A prim woman told me that if I “did not have the right piece of paper”, I was not permitted to enter. I protested that I simply wanted to see how the British vote: she said that I might be a terrorist and that I simply had to leave (so I went to the opera down the road and felt part of a greater reality).

The polling closed at 10.00 pm and, to the television watcher (the coverage was less lively than we’re accustomed to in Australia), the results seemed to be declared rather slowly. But it was different from an election: it was the actual numbers that were crucial and, astonishingly early, the trend became clear. By the end, before 5.00 am, it was 52% to leave, with the greatest turnout (72%, in a country without compulsory voting) in more than 20 years: the political and financial leadership had been rebuffed and, before 8.30 am, standing outside Number 10 Downing Street, David Cameron – having suffered the fate which, maladroitly, he had brought on himself – announced his resignation. That was inevitable: but, curiously, he will remain as “caretaker” until the party conference in the autumn. The result will be a Tory party that is focussed not on national problems or the negotiations with the European Union, but on its leadership battles. Not that things are more cheerful for Labour. That party also needs a new leader. Jeremy Corbyn was plainly conflicted during this campaign – a “Eurosceptic”, he found campaigning with conviction for “Remain” beyond him – and in an interview after the result was clear, he was equivocal, pallid and seemed utterly out of his depth. As in Australia, a big section of the electorate seems to be disillusioned (or worse) with the two political power blocs.

And if the forthcoming politics seem turbid, the situation is just as perplexing and concerning for the economy. The immediate result was a fall in the value of the pound and of the stock market: shares fell by 8–10% and, we were told, banking stocks by 20–30%, while the pound fell by a margin allegedly not seen since 1985. The Governor of the Bank of England, Mark Carney, made an impressive and emollient speech (which was plainly directed to the markets) but words have a limited utility. The metaphorical economic storm clouds are serious for Britain.
Even the very use of that word seems problematic at present. The country is seriously divided. The “Leave” votes were 53% in England, 53% in Wales, 44% in Ulster and 38% in Scotland. No less significant is the fact that whereas certain results were expected (notably London and the major cities strongly for “Remain”), in traditional Labour areas, notably in the north, there were strong “Leave” votes. Cameron gave the electorate the opportunity to repudiate the government, and they took it; but it was also an expression of “no confidence” in the Opposition.

So there is already serious talk about another independence referendum in Scotland (and even in Northern Ireland); Nicola Sturgeon will clearly feel emboldened. And there is, understandably, concern in the European capitals: there is talk of Britain not being allowed to “cherry-pick” the conditions of its exit. The politicians in Brussels and elsewhere do not want to encourage the waverers in the EU.

Meanwhile, though there is much brave talk in Britain – about “reclaiming independence”, or “protecting democracy” or “taking back control” – this is a step into an uncertain future. There’s a wide and exciting world out there, but a timid majority of Britons seem unwilling – or afraid – to want to live in it. It’s their choice, their risk, their lost opportunity. But as a great British writer and Divine once wrote, “No man is an island”.

Sic Gloria in Transit on Monday

Has it really been so long since there was a post here on the Mule? It would appear so and my only excuse is that I have been busy (isn’t everyone?). Even now, I have not pulled together a post myself but am once again leaning on the contributions of regular author, James Glover.

From pictures of the transit of Mercury you might think that Mercury is really close to the Sun and that is why it is so hot that lead is molten! In actual fact Mercury is about 0.4 Astronomical Units (AUs) from the Sun (Earth is about 1AU) and only receives about a 7 fold increase in sunlight intensity. So it is hot but not that hot. Mercury is about 40 solar diameters from the Sun. If the Sun were a golf ball then Mercury would be about 6 feet away and the Earth about 15 feet away. On Mercury the Sun subtends an arc of 1.4 degrees compared to 0.6 degrees on Earth.
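For the sceptical reader, these figures are easy to check with a few lines of Python. The constants are rounded textbook values, so the results are approximate:

```python
import math

# Back-of-the-envelope check of the Mercury/Sun figures in the text.
AU_KM = 149.6e6            # 1 Astronomical Unit in km
SUN_DIAMETER_KM = 1.39e6   # solar diameter in km
MERCURY_AU = 0.387         # Mercury's mean distance from the Sun

# Sunlight intensity follows an inverse-square law
intensity_ratio = 1 / MERCURY_AU**2                       # ~6.7x Earth's

# Mercury's distance expressed in solar diameters
solar_diameters = MERCURY_AU * AU_KM / SUN_DIAMETER_KM    # ~42

def angular_size_deg(diameter_km, distance_km):
    """Angle subtended by a body of the given diameter at the given distance."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

from_mercury = angular_size_deg(SUN_DIAMETER_KM, MERCURY_AU * AU_KM)  # ~1.4 deg
from_earth = angular_size_deg(SUN_DIAMETER_KM, AU_KM)                 # ~0.5 deg
```

All four numbers come out close to the figures quoted above.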

Mercury Transit

Pictures of the Moon in front of the Earth seem to have the same effect, to me at least, of making it look much closer than it is, whereas in reality the Moon is about 30 Earth diameters away. Roughly the same “size of larger body to distance of smaller one” ratio as Mercury is from the Sun.
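The Moon/Earth ratio is just as easy to verify, again with rounded standard values:

```python
# The Moon sits about 30 Earth diameters away, comparable (as a count of
# "larger-body diameters") to Mercury's roughly 42 solar diameters from the Sun.
MOON_DISTANCE_KM = 384_400    # mean Earth-Moon distance
EARTH_DIAMETER_KM = 12_742
earth_diameters = MOON_DISTANCE_KM / EARTH_DIAMETER_KM   # ~30
```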

Moon in front of the Earth

This optical effect (modesty prevents me from giving it a name) seems to occur when photographing one astronomical body in front of another. It can’t be that we are using the relative sizes as a proxy for distance, since Mercury/Sun is very small and Moon/Earth is relatively large. Lacking the other visual clues that a terrestrial photograph might provide, my guess is that we use the diameter of the larger body as a proxy for the distance from the smaller one, mentally substituting “distance across” for “distance from”. Or maybe it’s just me?

One possible explanation is that there is insufficient information in a 2D photo like this to determine the distance between the objects. But if asked “how far do you think the one in front is from the one behind?” rather than say “I can’t tell”, you choose one of the two pieces of metric information available, or some function of them, such as the average. Perhaps the brain is hardwired to always find an answer, even a wrong one, rather than admit “I don’t know”, “I have no answer” or “I have insufficient information to answer that question, Captain”. That would explain a lot of religion and politics.

Direct Action

It has been a very long time since there has been a post here on the Stubborn Mule. Even now, I have not started writing again myself but have the benefit of a return of regular guest poster, James Glover.

This is a post to explain the Australian Government’s policy called “Direct Action”. I will spare you the usual political diatribe. So here is how it works. The government has $3bn to spend on reducing carbon emissions. At a nominal cost of $15/tonne that could buy 200m tonnes of carbon abatement.

Okay so how does it work? The government conducts a “reverse auction” in which bidders say: “I can reduce carbon emissions by X tonnes at a cost of $Y per tonne”. You work out what is the biggest reduction for the least cost. You apportion that $3bn based on the highest amount of carbon reduction. Easy peasy. That $3bn comes from government spending so ultimately from taxpayers. [Editor’s note: while not directly relevant to the direct action versus trading scheme/tax discussion, I would argue in true Modern Monetary Theory style that the Australian government is not subject to a budget constraint, beyond a self-imposed political one, and funding does not come from tax payers].
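As a toy illustration of “biggest reduction for the least cost”, a cheapest-bids-first allocation might be sketched like this. The bids, the numbers and the partial-fill rule are invented for the example; the actual scheme rules are more involved:

```python
# A toy sketch of the reverse auction: allocate a fixed budget to the
# cheapest abatement bids first.

def allocate(budget, bids):
    """bids: list of (tonnes, price_per_tonne). Returns total tonnes abated."""
    total_tonnes = 0.0
    for tonnes, price in sorted(bids, key=lambda b: b[1]):  # cheapest first
        cost = tonnes * price
        if cost <= budget:
            budget -= cost
            total_tonnes += tonnes
        else:
            total_tonnes += budget / price   # partially fill the last bid
            break
    return total_tonnes

# A $3bn budget with everyone bidding $15/tonne buys 200m tonnes, as above
print(allocate(3e9, [(150e6, 15.0), (100e6, 15.0)]))   # 200000000.0
```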

As our new PM Malcolm Turnbull says, why should you have a problem with this? There is a cost and there is a reduction in carbon emissions. There will always be a cost associated with carbon reduction, regardless of the method, so what does it matter if this method isn’t quite the same as the carbon pricing systems previously advocated by the PM and his Environment Minister Greg Hunt? As long as there is a definite amount, Xm tonnes, reduced.

Well here are a few thoughts:

1. if a company is currently making a profit of, say, $500m a year producing electricity from coal-fired power stations, why would it participate in this process? There is no downside to staying out. Maybe.

2. Okay, it is a bit more subtle than that. Suppose the difference between the cost of producing electricity using coal and using renewables works out at $15 a tonne. You might reasonably bid at $16/tonne. In reality there is a large upfront cost of converting, so there is a possibility that an alternative energy provider takes that $15/tonne and uses it to subsidise their electricity cost. That could work: it encourages a coal-based provider to move to renewables. But a coal-based electricity provider might bid at $14/tonne to undermine them. This is what we call a “race to the bottom”.

3. It seems to be an argument about who exactly pays for carbon pollution. Well here is the simple answer: you pay. Who else would? And you pay because, well, you use the electricity.

4. There is no easy answer to this. Which approach encourages more electricity providers to move to renewables? That is hard to say. Every solution has its downside. I decided while writing this that I don’t actually care who pays, as long as carbon is reduced.

I started out thinking Turnbull was just using the excuse “as long as it works, who cares?”, but I have moved to the view that it doesn’t matter. All carbon reduction schemes move the cost onto the users (of course). There are many subtleties in this argument. I personally think a cap and trade system is the best because in a lot of ways it is more transparent. But in the end, as PM Turnbull says, who cares, as long as carbon is reduced. Presumably as long as that is what really happens, eh?

The Role of Cycles in Charting the Unknown

After penning a paper on the insidious Sleeping Beauty problem last year, Giulio Katis returns to the Mule with this guest post exploring the central ideas of The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses by Eric Ries. Starting with the immediate application to business startups, Giulio develops a broader view: dealing with uncertainty itself.

When you are about to undertake some activity, how often do you typically question what you are about to do?

If you are like me, typically you’ll just “Do it” (to quote one of Ben Stiller’s screen characters), but occasionally you’ll take the time to plan and reflect on how you can optimize what you are about to do.

We have been taught that when faced with an activity or a challenge we need to frame the problem, dissect the problem, plan a solution (if we are really clever, collaborate) and then implement. But what do we do when the problem is poorly understood, or if we can’t get the answers we need upfront? Pretend we know everything anyway? Give up? Have a stab and hope for the best?

In the context of doing business, Ries’ best-seller Lean Startup presents a systematic approach to dealing with this situation.

This book is part of a general trend to update traditional approaches to business management to accommodate the uncertainty and pace of change which new technology has created – which covers product, service and new capability development in most businesses today.

Ries sees the method he presents as a scientific approach to doing business that updates and complements the ideas presented in Frederick Taylor’s 1911 classic the Principles of Scientific Management (which championed the importance of analysis, planning and task specialization in business management, and influenced corporate legends like Henry Ford and Alfred Sloan).

Ries is by no means the first to do this. In any discussion on Startups today we take for granted concepts such as “disruptive technologies” and “disruptive innovation”, which have become part of our common language since Clayton Christensen wrote about them in his best-sellers The Innovator’s Dilemma and The Innovator’s Solution. And, as the name suggests, the practices of Lean Startup are to be understood in the context of the Lean approach to business process management, as pioneered by Toyota in manufacturing (which among other things challenged the assumption that optimal process engineering involved linear chains of specialised functions). Also, Lean Startup is closely related to software development practices such as Agile (with its iterative and disciplined approach to development involving continuous feedback and learning loops, as opposed to the Waterfall one-shot “gather requirements, design, build, deploy” approach) and Continuous Deployment (a process whereby all code that is written for an application is immediately deployed into production).

The basic ideas of Lean Startup, however, can be explained without references to these developed software and business management practices, and Ries does this in a simple, powerful and readable way.

Ries’ book is written primarily from the lens of a startup (which typically has to navigate extreme uncertainty with very limited resources); but as he makes clear the principles and methods are applicable to large enterprises, especially those that need to adapt to changing circumstances and operate in uncertainty in a cost-constrained manner.

Lean Startup comes from the perspective that the problem is not whether we can build or create a product, service or capability—we’ve become pretty good at building things that are well defined (perhaps part of the problem is that we’ve become so good at this); but rather the problem is what exactly should we build or create—which requires us to answer more deeply why we should create or support the things we are committed to, and question the assumptions that have been driving what we have been delivering to date.

So while many past business process management principles addressed the problem of how to optimally execute or produce and deliver a well-understood product or service, the problem Ries is solving is how a business operating in some degree of uncertainty can simultaneously explore, learn, build and service to maximise expected future value creation and/or growth in a resource constrained context.

The solution he presents deeply embeds the experimental method into the management process. In a nutshell, when developing, modifying or maintaining a product, service or capability, Lean Startup suggests we should proceed as follows:

  • explicitly identify the assumptions driving the need (opinion is not fact)
  • pick a key assumption yet to be validated
  • create a set of metrics designed to validate and explore the assumption
  • design a ‘Minimum Viable Product’ (MVP), which might be a change or an enhancement to the existing capability, that will allow us to obtain the desired metrics to test the assumption
  • build and deploy the MVP
  • collate the metrics
  • review the validity of and re-consider the assumptions and what is being developed
  • repeat

This gives rise to the mantra ‘Build-Measure-Learn’ repeated throughout the book.
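The steps above can be caricatured in a few lines of code. This is a toy sketch only: the assumptions, the stubbed “experiments” and the pass/fail threshold are all invented for illustration, not anything Ries prescribes.

```python
# A runnable caricature of the Build-Measure-Learn loop.

def build_measure_learn(assumptions, experiments, max_loops):
    """assumptions: dict name -> hypothesised value (opinion, not fact).
    experiments: dict name -> measured value, standing in for a deployed MVP."""
    validated, loops = {}, 0
    pending = list(assumptions)
    while pending and loops < max_loops:
        name = pending.pop(0)                        # pick a key assumption
        measured = experiments[name]                 # Build + Measure (stubbed)
        loops += 1
        if abs(measured - assumptions[name]) < 0.1:  # Learn: validated?
            validated[name] = measured               # persevere
        # otherwise: pivot, i.e. discard the assumption and re-plan
    return validated, loops

validated, loops = build_measure_learn(
    {"signup_rate": 0.30, "retention": 0.80},   # hypotheses
    {"signup_rate": 0.05, "retention": 0.75},   # what the MVPs "measured"
    max_loops=10,
)
print(validated, loops)   # {'retention': 0.75} 2
```

The point of the real method, of course, is everything this sketch leaves out: designing the MVP and the metrics, and the judgment involved in the persevere-or-pivot decision.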

This feedback loop may sound like a recipe – but Ries points out that this framework is far from a recipe. Many of the steps above require critical thinking, context specific insight, brainstorming and in some cases courage.

On the point of courage, at the end of each loop there is a critical decision to be made which Ries describes in terms of having to choose whether to persevere or pivot. Pivoting involves “a course correction designed to test a new fundamental hypothesis about the product, strategy, and engine of growth”. Under Lean Startup, pivoting is not considered as failure (involving change of management, say), but rather a necessary and important part of doing business. Not pivoting enough before the startup (or project) capital runs out is typically the cause of failure.

This gives rise to the concept of startup time as opposed to calendar time. Ries notes that typically, to measure how long a startup has left, we take the capital remaining (e.g. $1m) and divide by the burn rate ($100k per month) to get the answer (10 months); but an alternative measure, which may tell us something more about the likelihood of the startup’s success, would be to estimate how many Build-Measure-Learn loops or possible pivots the startup could perform before running out of capital. The central practical message of the book is that the faster a startup can get through a Build-Measure-Learn loop, the more it can learn and thus the greater the chance it will succeed before funding runs out.
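The two runway measures can be put side by side; the $250k cost per loop below is an invented figure, purely for illustration:

```python
# Calendar-time runway versus startup-time runway, using the text's figures.

def runway_months(capital, burn_rate_per_month):
    return capital / burn_rate_per_month

def runway_loops(capital, cost_per_loop):
    """How many complete Build-Measure-Learn iterations the capital can fund."""
    return capital // cost_per_loop

print(runway_months(1_000_000, 100_000))   # 10.0 (months)
print(runway_loops(1_000_000, 250_000))    # 4 (loops, i.e. chances to pivot)
```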

What is learnt is obviously a function of both the questions asked as well as the way they are answered. In terms of the answers, a key distinction Ries draws is between what he calls vanity and actionable metrics. Vanity metrics (e.g. gross turnover, gross profit) are lagging indicators that tell businesses what they want to hear (until they don’t), and do not provide information that can be used to make constructive changes. Instead of focusing on these, Ries puts forward the concept of actionable metrics which are designed to answer questions about what is actually driving customer behaviour, turnover, cost, profitability etc. For example, actionable metrics on customer behaviour might give data on how the customer joined, what was their first experience, why they are leaving or being retained. As the name suggests, they provide insight into what needs to be changed to create more value and/or growth (and obviously should be used in any business, regardless of its size or maturity).

Perhaps one of the biggest challenges Ries poses (to anyone running a business) is to assess yourself not in terms of the quality of the products or services you have produced, and not even in terms of the growth or profitability you have achieved to date, but in terms of how much you have learnt about what is driving your customers, your costs, your profitability, your growth and so on. To genuinely adopt this perspective would obviously require a radical and courageous mindshift for most managers.

How the Lean Startup method can be applied in a mature, large, complex business is not something Ries spends time on (Furr and Dyer’s The Innovator’s Method: Bringing the Lean Startup into Your Organization spends more time on this question). Even though this is a non-trivial problem, it would seem even in the context of a business unit that is focused on execution and optimization (as opposed to innovation), there is scope to apply Lean Startup methods. I say this because I believe there is a degree of uncertainty (and thus the need for learning) in just about all business areas. For example, in the NPR podcast From Harvard Economist To Casino CEO (which was brought to my attention by Mark Lauer quite some time ago), Gary Loveman describes his use of randomized experiments (e.g. A/B testing) in an established casino to understand what customers liked, what they didn’t, what would make them come back if they lost a lot of money one night, etc. (Gary Loveman was well-known, amongst other things, for recognizing the value of the repeat slot players over the high rollers.)

After reading Ries I found myself asking what the implications were for (business) strategy. It is often said that strategy is easy and implementation is the hard part. Nevertheless, there is still the myth of the business leader (read Steve Jobs) who had the strategic initiatives that guided the company exactly where it needed to go. But these types of strategic initiatives are typically just informed, inspired or lucky guesses. If, however, a business leader can orchestrate the activities of their organisation so that Lean Startup principles work concurrently with all the other business management practices needed to run the organisation effectively, then in theory the strategic initiatives should evolve, accumulate, and be generated by and selected for as a result of the way the organisation operates and does business (read Build-Measure-Learn loops), with bottom-up (generative) and top-down (guiding, co-ordinating) forces connected by their own feedback loops.

Ries’ book is considered by many as a must read for anyone wanting to start up a business (making a couple of the Forbes top entrepreneur and business book lists in 2014); and no doubt will be on the reading lists (if it isn’t already) of many business managers in larger organizations that need to grapple with change and innovation. It’s also a good read for anyone who is interested in what’s going on “out there” at the moment in the land of entrepreneurs and business management theory. But I think part of the reason why it resonated so strongly for me (in addition to the practical value it has for my work) was that the book is written in such a simple and powerful way as to imply applicability and meaning more broadly than for business.

The importance of feedback and cycles in the Lean Startup approach should be obvious. Mathematicians, scientists, engineers and the military have long recognized the importance of feedback as a way of dealing with uncertainty (going back to Norbert Wiener, the originator of cybernetics). In fact, Ries mentions that the Build-Measure-Learn feedback loop owes a lot to ideas from manoeuvre warfare, in particular, John Boyd’s Observe-Orient-Decide-Act Loop. But even though these ideas have been explored formally for well over a century (and, no doubt, millennia informally), it feels like we have still a long way to go in understanding the role of cycles in nature. For instance, in 2011 the Edge asked a number of prominent thinkers to answer ‘what scientific concept would improve everybody’s cognitive toolkit’. Daniel Dennett’s response (which in my opinion was one of the most thoughtful responses the Edge received to the question) was the concept of cycles.  As he ended his response: “a good rule of thumb, then, when confronting the apparent magic of the world of life and mind is: look for the cycles that are doing all the hard work”.

Fundamentally, Lean Startup is a study in how to deal with the unknown—both “known unknowns” through experimental design and measurement as well as (as much as is possible) “unknown unknowns”, through the process of continuous experimentation and exploration.  In his 200 m.p.h. (and very readable) book Sapiens: A Brief History of Humankind, Yuval Harari asks the question ‘what potential did Europe develop in the early modern period that enabled it to dominate the late modern world?’. He makes the claim that (all the good arguments of Jared Diamond’s Guns, Germs and Steel notwithstanding) one way to understand Europe’s ability to expand and dominate was in terms of its approach to the unknown, as can be seen through the development of maps. He notes that before the fifteenth century unknown or unfamiliar areas were simply left out of maps, or filled with imaginary monsters and wonders. “These maps had no empty spaces… During the fifteenth and sixteenth centuries, Europeans began to draw world maps with lots of empty spaces—one indication of the development of the scientific mindset, as well as of the European imperial drive.” I would like to know (from someone familiar with this part of history) whether the European nations that were more successful at world domination were those that were in some sense able to more quickly and more effectively cycle through Build-Measure-Learn loops.

So, on reflection, the main message I took away from Lean Startup was not something specific to just business. Rather it was the reminder that no matter how much work we do to create certainty, the unknown is all around us—and that there are more and there are less constructive ways to engage with it.

Bitcoin and the Blockchain

It’s hard to believe that a whole year has passed since I last wrote on the topic of bitcoin, and my remaining 1 bitcoin is worth rather less than it was back then. During the week I presented at the Sydney Financial Mathematics Workshop on the topic of bitcoin, taking a rather more technical look at the mechanics of the blockchain than in my previous posts here on the Mule. For those who are interested in how Satoshi Nakamoto solved the “double spend” problem, here are the slides from that presentation.

Bitcoin and the Blockchain

As part of my preparation for the presentation, I read Bitcon: The Naked Truth About Bitcoin. If you are a bitcoin sceptic, you should enjoy the book. If you are a Bitcoin true believer, you will probably hate it. It is over-blown in parts and gets a few technical details wrong, but I am increasingly convinced by the core argument of the book: the blockchain is an extraordinary innovation which may well change the way money moves around the world, but bitcoin the currency will prove to be a fad.

The New Normal

With the Intergenerational Report now released, the meme of “intergenerational theft” is spreading. Bill Mitchell has already shredded the core assumptions of the report, and now first-time guest author Andrew Baume brings to the Mule the perspective of a financial markets practitioner on our possible future wealth. In broad strokes, he concludes:

  • post-paid retirement is now the exception not the rule
  • the balance sheet that supports your retirement is now your own
  • absence of inflation is the enemy

Many older Australians have ridden the magic carpet of high levels of inflation which brought asset valuations to today’s levels. This has been particularly felt in the property market which has been the foundation for most of the “unearned” growth in asset base for anyone aged 50 or older. Property has been the asset which we have been most comfortable to leverage at incredibly high multiples.

Downsizing has liberated much of this unearned wealth and turned it into retirement income. This transfer is much like Potential Energy being converted into Kinetic Energy. As with physics, there is no perpetual motion machine, so the transfer is permanent for those who do it. It is, however, also true that they continue to store some of the liberated KE, though not in the illiquid, high-ticket, indivisible item that the large family home generally represents: it is usually reinvested into different asset classes.

Although investing is a core competence for some (certainly not this author!), normal people with money have found good clips of that money fall into their lap by having lived somewhere or having contributed to super. They have been comfortable in property because it “always goes up” and over their lives they have traded it as little as twice. Reinvesting the nest egg is a massive step.

Some commentators have made very strong cases for being heavily invested in risk assets post retirement, whilst others have argued that the timing of shocks can have a massive and unexpected effect on post-retirement incomes. Both are right of course, but the big impact that seems less understood is the massive shift in post-retirement income from post-paid to pre-paid, and how that has fundamentally changed the return equation.

Rates of Return
Once pre-paid retirement hits its straps, the clamour for return creates a dynamic akin to the paradox of thrift. As the population ages and demographic analysis promotes the concept that the welfare safety net needs to be drawn tighter, government services are reduced. The need to build a safety net for ourselves drives our return expectations lower and lower as the “best” assets (like bonds and bank shares) are bid upwards and the marginal assets (including higher-leveraged companies and marginal property assets) benefit from that bid due to a crowding-in effect. The impact of this trend has been seen most clearly in the global bond markets.

30 years ago, when I started as a foreign exchange and interest rates trader, the US 30 year bond was the bellwether for the health of the financial system. It was highly volatile and its movements reflected the broad market’s view of the capacity of the US to manage its (and therefore the world’s) economy. It seemed strange to me at the time that an instrument whose yield represented the projected average fed funds rate over the next 30 years would have any movement at all related to the near-term health of the economy.

Its volatility existed because in 1985 most retirement incomes were funded by the issue of that bond and others like it (by other governmental authorities) or by corporate debt in the case of corporate pension providers. Most defined benefit schemes were not close to fully funded so debt financed the pension provider’s obligations.

US 30 year Treasuries

Chart 1 – US 30 year Treasury yield
Source: Bloomberg

I wish I had been smart enough to buy a zero coupon 30 year US Government bond back then. It would have smacked me in tax over the 30 years, but I would be getting back 17.5 times the money invested this month. As a comparison, the US stock index, the S&P 500, is up by a factor of 11.7 times (an unfair direct comparison because it is not tax adjusted).
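It is easy to back out the annual returns those two 30-year multiples imply:

```python
# What compound annual return turns $1 into $17.5 (or $11.7) over 30 years?

def implied_annual_return(multiple, years):
    return multiple ** (1 / years) - 1

bond_return = implied_annual_return(17.5, 30)     # ~10% p.a., roughly where
                                                  # 30y Treasury yields sat in 1985
equity_return = implied_annual_return(11.7, 30)   # ~8.5% p.a. for the S&P 500
```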

These returns happened because the bond market was pricing way too much inflation and the equity market benefitted by there being just enough.

The dynamic in 2015 is the almost total reverse. Equity markets have lofty valuations underpinned by mediocre revenue growth, capital buy backs (as the companies can’t use the capital themselves) and bond yields that are ridiculously low as pre-paid retirement drives yields lower and lower in concert with global government policy of zero rates and Quantitative Easing (QE).

There is little doubt that it is appropriate for governments around the world to try to influence capital to take risk in the current environment. The paradox of thrift has driven risk aversion even further than the salutary lessons of the global financial crisis.

The investment profile of most companies has gone defensive, with little entrepreneurialism and widespread equity buy-backs. This lack of capital formation sees equity markets continue to rally on flows, not earnings. Consequently investors have gone massively short volatility, and they have done so for one overriding reason: the near total absence of inflation expectations.

Gordon Gekko said “greed is good”. If greed is good, it is fed by a healthy inflation expectation. Inflation has a musky quality: like a magnificent Burgundy it needs to have a little funk, but get too funky and it spoils everything. No funk and you have strawberry cordial. Current markets are strawberry cordial, and they are so by design, governments still reacting to a zeitgeist in which fear holds the upper hand. For the next generations’ sake, let’s hope we get some greed back.

Most Baby-Boomers and older have experienced the best fortune inflation can bring you. Liabilities that are nominal in nature, set against real assets (property and equity), can bring massive compounding benefits. As wages grow to match inflation (and employees advance through the ranks), the liabilities whittle away into nothing. Those conditions allowed us to regurgitate the old saying “it’s not timing the market, it’s time in the market”. History is a great indicator of history, but absent inflation, and with an incredibly long period of defensiveness from a capital formation standpoint, it seems ever more likely that the current and next phases of markets will look like little else that has gone before.

In the last two years the Australian Index has increased circa 18%, which might seem to give the lie to this argument. On the contrary, it suggests that the drive to low returns is a long way from over, keeping the bid in the equity market strong. It is this author’s contention that, with over 9% of that growth occurring since the RBA decided the Australian economy was fragile enough to require a rate cut to a level not seen in my lifetime, we are solidly in the execution phase of this return compression. Those of us lucky enough to have money to invest should do so now, in any asset with moderate leverage and a high yield.

Once the compression of yields is more complete, the market will roll down one of three clear but quite different paths:

  1. Inflation returns moderately and gradually, allowing the central banks around the world to very, very slowly unwind the extraordinary accommodation so that the dividend discount model is basically unchanged (i.e. dividends rise as fast as the rates raised to combat overstimulation). This path is technically known as Nirvana, and we all get to retire rich.
  2. Inflation does not develop for some time, meaning returns remain bid down and bond markets globally provide savers with approximately zero income, driving investors into earning whatever they can from “real assets”. This is a great outcome for those already invested, as they benefit from the compression in returns while their living expenses remain low. The paradox of thrift keeps returns low as fear remains in the system, and the lack of confidence provides a negative feedback loop on inflation and fuels currency devaluation wars. This path is technically known as “the Baby Boomers stealing food from their children’s (and grandchildren’s) mouths”.
  3. Inflation takes some time to develop, but when it does it takes a classical monetarist predicted path and smashes valuations very hard as rates back up markedly, not just to reverse the accommodation but to put the brakes on rampant price indiscipline. This path is technically known as “Timing the market has never been so important”.
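One hedged way to see why the three paths differ so much is the standard dividend discount (Gordon growth) model, P = D/(r − g). The model and the numbers below are my illustration, not the author’s:

```python
def gordon_price(dividend, r, g):
    """Gordon growth model: price of a dividend growing at g, discounted at r."""
    assert r > g, "discount rate must exceed growth for the series to converge"
    return dividend / (r - g)

base = gordon_price(1.0, 0.07, 0.03)             # starting point, price ≈ 25
nirvana = gordon_price(1.0, 0.09, 0.05)          # path 1: r and g rise together, price ≈ unchanged
compression = gordon_price(1.0, 0.05, 0.03)      # path 2: yields bid down, price ≈ doubles
inflation_shock = gordon_price(1.0, 0.11, 0.04)  # path 3: r backs up faster than g, price falls hard
print(base, nirvana, compression, inflation_shock)
```

Path 1 leaves valuations untouched because the rise in rates is matched by dividend growth; path 2 is the yield compression the essay argues we are living through; path 3 is the valuation smash.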

Interestingly, path 3 is probably the best outcome for a youngster without much yet in the market. They may have leveraged a bit, and after the valuation adjustment works through they get a higher wage, and the absolute (not real) level of their assets recovers based on the higher cashflow brought by inflation. Anyone who doubts this should think about their own personal balance sheet in 1973.

The Balance Sheet
When retirement was provided by either a defined benefit scheme or a government pension (especially government employee pensions), the issues of timing were completely irrelevant and the investment landscape was also largely irrelevant. Many firms’ pension schemes were “underfunded” and the government did not recognise the future liability at all in the budget, only the current financial year’s expense. Someone else’s balance sheet took all the variability. This all changed once governments realised that the unfunded liability would cripple them (admittedly, rating agencies had a big role to play in helping them realise this). After the 1987 crash, company balance sheets also began to recognise the potentially life-threatening exposure they were taking to equities via their pension schemes.

S&P estimates that the anticipation of quantitative easing in Europe squashed bond yields so much that the liabilities of defined-benefit pension plans rose by up to 18 percent last year. Its analysis looked at the top 50 European companies it rates that have defined-benefit pension plans and are “materially underfunded”, meaning the plans have deficits of more than 10 percent of adjusted debt, and that debt is more than 1 billion euros. In 2013, liabilities outstripped assets for that group by more than 30 percent on average.

Source: Bloomberg
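The mechanism behind the S&P estimate can be sketched with a simple present-value calculation. The cashflows and yields below are illustrative assumptions of mine, not S&P’s figures: discounting a fixed stream of pension payments at a lower yield raises the liability sharply.

```python
def pv(cashflow, years, y):
    """Present value of a flat annual cashflow stream discounted at yield y."""
    return sum(cashflow / (1 + y) ** t for t in range(1, years + 1))

# Illustrative: 30 years of pension payments, yields squashed from 3.5% to 2.0%
liability_before = pv(1_000_000, 30, 0.035)
liability_after = pv(1_000_000, 30, 0.020)
increase = liability_after / liability_before - 1
print(f"liability rises {increase:.0%} when the discount yield falls 1.5 points")
```

On these assumptions the liability rises by roughly a fifth, the same order of magnitude as the 18 percent S&P reports, without a single pension promise changing.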

Moves to defined contribution schemes and superannuation guarantees are not localised Australian issues. The world has shifted and, as Chart 1 shows, one key outcome has been that there is more money to be potentially invested for 30-year debt-like returns than there are creditworthy borrowers who want the money.

Companies that switched to “Liability Driven Investments” to fund their pension schemes more than seven years ago are less vulnerable. These shortfalls will force many unprepared companies to play catch-up in their asset allocations.

Variability is much harder to take on a personal balance sheet when external income is no longer being received (i.e. when you are retired). It seems that extreme variations in equity markets are an inevitable consequence of the current reach for yield, unless world economic growth has a strong and sustained recovery that outweighs the downward pressure on valuations from the consequent increase in bond yields. In the case of Japan, time in the market has not been your friend:


Chart 2 – Japan’s Nikkei stockmarket index
Source: Bloomberg

I have actually been generous in this chart, as the vertical axis starts well after the 38,915 peak in 1989. It is a dangerous assumption that markets will behave differently in the future than they have in the past, but perhaps this chart shows an economy that has been operating a little like the new normal for the last 20 years. It is also interesting to note that the massive rally over the past 12 months has been driven by QE.

Timing that market would have been one of the most lucrative investments possible and, funnily enough, it was completely predictable and transparent.

The paradox of thrift has usually been applied to emerging economies where very few social services are supplied by the state. The aging of the population combined with the shift to pre-funded retirement in an environment where social services are being pared back is creating this phenomenon in the advanced economies. This is potentially the most egregious form of intergenerational theft.

The associated absence of inflation will also tend to remove the fantastic wealth-building effect of unearned capital appreciation, primarily through property. This source of wealth for the over-50s may be replicated by modern young parents, but success on the scale that older folk have enjoyed seems unlikely.

In all but the Nirvana case outlined above, the outcome seems clear. The retirement age may stay where it is, but the size of the pension relative to retirement income expectations will continue to deteriorate as pension growth fails to keep pace with lifestyle. People will have little choice but to continue some form of labour-driven income generation into their late sixties and most likely their seventies. Our personal balance sheets, which now bear the risk, will demand it.

A set-and-forget equity portfolio may work, but it may also lead us into a very active post-retirement game of catch-up. Those with no nest egg may struggle to build a retirement savings pool that allows them to leave employment before well into their seventies and, further, may be subject to violent valuation adjustments.


Last year I wrote on a couple of occasions about the Sleeping Beauty problem. The problem raises some tricky questions and I did promise to attempt to answer the questions, which I am yet to do. Only last week, I was discussing the problem again with my friend Giulio, whose paper on the subject I published here. That discussion prompted me to go back to the inspiration for the original post: a series of posts on Bob Walters’ blog. I re-read all of his posts, including his fourth post on the topic, which began:

I have been waiting to see some conclusions coming from discussions of the problem at the Stubborn Mule blog, however the discussion seems to have petered out without a final statement.

Sadly, even if I do get to my conclusions, I will not be able to get Bob’s reaction, because last week he died and the world has lost a great, inspirational mathematician.

Bob was my supervisor in the Honours year of my degree in mathematics and he also supervised Giulio for his PhD. Exchanging emails with Giulio this week, we both have vivid memories of an early experience of Bob’s inspirational approach to mathematics. This story may not resonate for everyone, but I can assure you that there are not many lectures from over 25 years ago that I can still recall.

The scene was a 3rd year lecture on Categories in Computer Science. Bob started talking about stacks, a very basic data structure used in computing. You should think of a stack of plates: you can put a plate on the top of the stack, or you can take one off. Importantly, you only push on or pop off plates from the top of the stack (unless you want your stack to crash to the floor). And how should a mathematician think about a stack? As Bob explained it, from the perspective of a category theorist, the key to understanding stacks is to think about pushing and popping as inverse functions. Bear with me, and I will take you through his explanation.

Rather than being a stack of plates, we will work with a stack of a particular type of data, and I will denote by X the set of possible data elements (X could denote integers, strings, Booleans, or whatever data type you like). The set of stacks of type X will then be denoted by S. Our two key operations are push and pop.

The push operation takes an element of X and a stack and returns a new stack, which is just the old stack with the element of X added on the top. So, it’s a function push: X × S → S. Conversely, pop is a function pop: S → X × S which takes a stack and returns the top element and a stack, which is everything that’s left after you pop the top.

So far, so good, but there are some edge cases to worry about. We should be able to deal with an empty stack, but what if we try to pop an element from the empty stack? That doesn’t work, but we can deal with this by returning an error state. This means that we should really think of pop as a function pop: S → X × S + I, where I is a single-element set, say {ERR}. Here the + is a (disjoint) union of sets, which means that the pop function will either return a pair (an element of X and a stack) or an error state. This might be a bit confusing, so to make it concrete, imagine I have a stack s = (x1, x2, x3) then

pop((x1, x2, x3)) = (x1, (x2, x3))

and this ordered pair of data element x1 and (shorter) stack (x2, x3) is an element of X × S. Now if I want to pop an empty stack (), I have

pop(()) = ERR

which is in I. So pop will always either return an element of X × S or an element of I (in fact, the only element there is).

This should prompt us to revisit push as well, which should really be considered as a function push: X × S + I → S which, given an element of X and a stack will combine them, but given the single element of I will return an empty stack, so push(ERR) = ().

The key insight now is that pop and push are inverses of each other. If I push an element onto a stack and pop it again, I get back my element and the original stack. If I pop an element from a stack and push it back on, I get back my original stack. Extending these functions to all of X × S + I ensures that this holds true even for the edge cases.
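In a modern language, Bob’s construction can be sketched as follows; this is my Python illustration of the lecture’s functions, with ERR standing in for the single element of I and tuples standing in for stacks:

```python
# A sketch of Bob's stacks: push and pop as mutually inverse functions.
# ERR is the single element of the set I; () is the empty stack.
ERR = "ERR"

def push(arg):
    """push: X × S + I → S"""
    if arg == ERR:
        return ()                 # push(ERR) = ()
    x, s = arg
    return (x,) + s               # put x on top of the stack

def pop(s):
    """pop: S → X × S + I"""
    if s == ():
        return ERR                # popping the empty stack is the error state
    return (s[0], s[1:])          # top element and the rest of the stack

stack = ("x1", "x2", "x3")
assert pop(stack) == ("x1", ("x2", "x3"))
assert push(pop(stack)) == stack                   # pop then push: stack restored
assert push(pop(())) == ()                         # ...even for the empty stack
assert pop(push(("x0", stack))) == ("x0", stack)   # push then pop: pair restored
assert pop(push(ERR)) == ERR                       # ...and for the error state
```

The four round-trip assertions are exactly the statement that push and pop are inverse functions on the extended sets.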

But if push and pop are inverses then X × S + I and S must essentially be the same: mathematically, they are isomorphic. This is where the magic begins. As Bob said in his lecture, “let’s be bold like Gauss”, and he proceeded with the following calculation:

X × S + I = S

I = S – X × S = S × (I – X)

S = I / (I – X)

and so

S = I + X + X² + X³ + …

The last couple of steps are the bold ones, but they actually make sense. The last equation basically says that a stack is either an empty stack, or a single element of X, or an ordered pair of elements of X, or an ordered triple of elements of X, and so on.
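For a finite X with k elements, the identity can even be checked numerically: the number of stacks of height at most n should be 1 + k + k² + … + kⁿ. A quick sanity check of my own, not from the lecture:

```python
from itertools import product

k, n = 3, 4                      # illustrative: |X| = 3, stacks up to height 4
X = range(k)

# Enumerate all stacks (ordered tuples over X) of height 0..n directly...
stacks = [s for h in range(n + 1) for s in product(X, repeat=h)]

# ...and compare with the truncated geometric series I + X + X² + … + Xⁿ.
assert len(stacks) == sum(k**h for h in range(n + 1))   # 1 + 3 + 9 + 27 + 81 = 121
```

The full identity S = I + X + X² + … is the limit of this count as n grows.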

I’d known since high school that 1/(1 – x) could be expanded to 1 + x + x² + x³ + …, but applying this to a data structure like stacks was uncharted territory. I was hooked, and the following year I had the privilege of working with Bob on my honours thesis, and that work ultimately made it into a couple of joint papers with Bob.

I haven’t seen Bob face to face for many years now, but we have occasionally kept in touch through topics of mutual interest on our blogs. While I have not kept up with his academic work, I have always considered him more than just a brilliant mathematician. He was a creative, inspirational, radical thinker who had an enormous influence on me and, I know, many others.

RFC Walters, rest in peace.

Musical Education

On our longer family drives I take an old iPod crammed with even older music. Usually I take requests, and almost inevitably the children choose They Might Be Giants, preferably the tracks Fingertips and Particle Man. But our last trip was different. Instead, I took the opportunity to give the children some exposure to artists formative in the history of popular music. There is nothing like a grand plan to pass the time on the freeway.

Skimming through the albums, I decided that the best of The Jam would be a good place to start. It went down surprisingly well. Even our eldest, who generally prefers electronica, responded well to Eton Rifles. Marking that up as a success, the next choice was the best of Madness. This was more familiar territory, as they already knew (and loved) I Like Driving in My Car. Again it was successful.

Although this was a good start, it was not systematic, depending as it did on swift scanning through the albums on the iPod. So I have now begun to assemble a playlist on Spotify with a name as grandiose as its aim: Musical Education. The rules are simple but tough:

  1. Four representative tracks each (no more) are selected from major artists in the history of popular music.
  2. Each track must be from a different studio album. If the artist does not have at least four albums, refer to rule three. Singles not released on an album are also eligible.
  3. Single tracks can be included for important artists lacking the catalogue breadth for four essential tracks.

The playlist has nearly reached 150 tracks and includes artists such as The Doors, The Animals, James Brown and Prince. Inevitably, some choices reflect my own interests. My taste in Krautrock ensures the appearance of Kraftwerk, but in their defence I point to their appearances at the Tate and MOMA in recent years. Other choices may not have the endorsement of the artworld, but surely the sheer persistence of Mark E. Smith in continuing his post-punk aesthetic justifies a place for The Fall (Update: The New Yorker rates The Fall highly too). As for XTC, well, my own obsessions may be tilting the scales of significance. But perhaps not.

For some artists, choosing only four tracks is extremely difficult. Four David Bowie tracks…how? But rules are rules. Fortunately the toughest choice is taken away from me. The Beatles are not on Spotify, so they are ruled out on a technicality.

I have been road testing the list and there have been some surprises. The middle child has developed a strong interest in The Beach Boys, particularly God Only Knows (and that’s not just because of the BBC version), while the eldest has expressed a visceral dislike for James Brown. I did expect some bumps in the road of this musical journey: after all the boys refuse to let me play Nick Drake in the car (maybe one day they will learn they are wrong). Still, I am now getting requests for Hit the North, so something must be working.

This musical education is a work in progress, so I need help from all of you. Are there any big names I have missed? Not all of the artists in the list are my own favourites, so I may have missed an essential track. Comments are open below, so please jump in!


Sleeping Beauty – a “halfer” approach

If you read the last post on the Sleeping Beauty problem, you may recall I did not pledge allegiance to either the “halfer” or the “thirder” camp, because I was still thinking my position through. More than a month later, I still can’t say I am satisfied. Mathematically, the thirder position seems to be the most coherent, but intuitively, it doesn’t seem quite right.

Mathematically the thirder position works well because it is the same as a simpler problem. Imagine the director of the research lab drops in to see how things are going. The director knows all of the details of the Sleeping Beauty experiment, but does not know whether today is day one or two of the experiment. Looking in, she sees Sleeping Beauty awake. To what degree should she believe that the coin toss was Heads? Here there is no memory-wiping and the problem fits neatly into standard applications of probability and the answer is 1/3.

My intuitive difficulty with the thirder position is better expressed with a more extreme version of the Sleeping Beauty problem. Instead of flipping the coin once, the experimenters flip the coin 19 times. If there are 19 Tails in a row (which has a probability of 1 in 524,288), Sleeping Beauty will be woken 1 million times. Otherwise (i.e. if there was at least one Heads tossed), she will only be woken once. Following the standard argument of the thirders, when Sleeping Beauty is awoken and asked for her degree of belief that the coin tosses turned up at least one Heads, she should say approximately 1/3 (or more precisely, 524287/1524287). Intuitively, this doesn’t seem right. Notwithstanding the potential for 1 million awakenings, I would find it hard to bet against something that started off as a 524287/524288 chance. Surely when Sleeping Beauty wakes up, she would be quite confident that at least one Heads came up and she is in the single awakening scenario.
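For the record, the thirder arithmetic for this extreme version can be checked directly (a quick calculation of my own):

```python
from fractions import Fraction

n_outcomes = 2**19                  # equally likely sequences of 19 tosses
awakenings_heads = n_outcomes - 1   # one awakening for each "at least one Heads" outcome
awakenings_tails = 1_000_000        # the single all-Tails outcome wakes her a million times

# Thirder credence that at least one Heads was tossed, given that she is awake:
thirder = Fraction(awakenings_heads, awakenings_heads + awakenings_tails)
assert thirder == Fraction(524287, 1524287)
print(float(thirder))               # ≈ 0.344, against a prior of 524287/524288
```

The tension is plain in the numbers: a prior of essentially certainty is pulled down to about a third purely by the million potential awakenings.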

Despite the concerns my intuition throws up, the typical thirder argues that Sleeping Beauty should assign 1/3 to Heads on the basis that she and the director have identical information. For example, here is an excerpt from a comment by RSM on the original post:

I want to know if halfers believe that two people with identical information about a problem, and with an identical set of priors, should assign identical probabilities to a hypothesis. I see the following possibilities:

  1. The answer is no -> could be a halfer (but not necessarily).
  2. The answer is yes, but the person holds that conditionalization is not a valid procedure -> could be a halfer.
  3. The answer is yes and the person accepts conditionalization, but does not accept that the priors for the four possibilities in the Sleeping Beauty puzzle should be equal -> could be a halfer.
  4. Otherwise, must be a thirder.

My intuition suggests, in a way I struggle to make precise, that Sleeping Beauty and the director do not in fact have identical information. All I can say is that Sleeping Beauty knows she will be awake on Monday (even if she subsequently forgets the experience), but the director may not observe Sleeping Beauty on Monday at all.

Nevertheless, option 2 raises interesting possibilities, ones that have been explored in a number of papers. For example, in D.J. Bradley’s “Self-location is no problem for conditionalization”, Synthese 182, 393–411 (2011), it is argued that learning temporal information involves “belief mutation”, which requires a different approach to updating beliefs than “discovery” of non-temporal information, which makes use of conditionalisation.

All of this serves as a somewhat lengthy introduction to an interesting approach to the problem developed by Giulio Katis, who first introduced me to the problem. The Stubborn Mule may not be a well-known mathematical imprint, but I am pleased to be able to publish his paper, Sleeping Beauty, the probability of an experiment being in a state, and composing experiments, here on this site. In this post I will include excerpts from the paper, but I encourage those interested in a mathematical framing of a halfer’s approach to read it in full. I am sure that Giulio will welcome comments on the paper.

Giulio begins:

The view taken in this note is that the contention between halfers and thirders over the Sleeping Beauty (SB) problem arises primarily for two reasons. The first reason relates to exactly what experiment or frame of reference is being considered: the perspective of SB inside the experiment, or the perspective of an external observer who chooses to randomly inspect the state of the experiment. The second reason is that confusion persists because most thirders and halfers have not explicitly described their approach in terms of generally defining a concept such as “the probability of an experiment being in a state satisfying a property P conditional on the state satisfying property C”.

Here Giulio harks back to Bob Walters’ distinction between experiments and states. In the context of the Sleeping Beauty problem, the “experiment” is a full run from coin toss through Monday and Tuesday, and a “state” is a particular point in the experiment. As an example, P could be the property that the coin toss was Heads and C the property that Sleeping Beauty is awake.

From here, Giulio goes on to describe two possible “probability” calculations. The first would be familiar to thirders and Giulio notes:

What thirders appear to be calculating is the probability that an external observer randomly inspecting the state of an experiment finds the state to be satisfying P. Indeed, someone coming to randomly inspect this modified SB problem (not knowing on what day it started) is twice as likely to find the experiment in the case where tails was tossed. This reflects the fact that the reference frame or ‘time-frame’ of this external observer is different to that of (or, shall we say, to that ‘inside’) the experiment they have come to observe. To formally model this situation would seem to require modelling an experiment being run within another experiment.

The halfer approach is then characterised as follows:

The halfers are effectively calculating as follows: first calculate for each complete behaviour of the experiment the probability that the behaviour is in a state satisfying property P; and then take the expected value of this quantity with respect to the probability measure on the space of behaviours of the experiment. Denote this quantity by ΠX(P).

An interesting observation about this definition follows:

Note that even though at the level of each behaviour the ‘probability of being in a state satisfying P’ is a genuine probability measure, the quantity ΠX(P) is not in general a probability measure on the set of states of X. Rather, it is an expected value of such probabilities. Mathematically, it fails in general to be a probability measure because the normalization denominators n(p) may vary for each path. Even though this is technically not a probability measure, I will, perhaps wrongly, continue to call ΠX(P) a probability.

I think that this is an important observation. As I noted at the outset, the mathematics of the thirder position “works”, but typically halfers end up facing all sorts of nasty side-effects. For example, an incautious halfer may be forced to conclude that, if the experimenters tell Sleeping Beauty that today is Monday then she should update her degree of belief that the coin toss came up Heads to 2/3. In the literature there are some highly inelegant attempts to avoid these kinds of conclusions. Giulio’s avoids these issues by embracing the idea that, for the Sleeping Beauty problem, something other than a probability measure may be more appropriate for modelling “credence”:

I should say at this point that, even though ΠX(P) is not technically a probability, I am a halfer in that I believe it is the right quantity SB needs to calculate to inform her degree of ‘credence’ in being in a state where heads had been tossed. It does not seem ΞX(P) [the thirders’ probability] reflects the temporal or behavioural properties of the experiment. To see this, imagine a mild modification of the SB experiment (one where the institute in which the experiment is carried out is under cost pressures): if Heads is tossed then the experiment ends after the Monday (so the bed may now be used for some other experiment on the Tuesday). This experiment now runs for one day less if Heads was tossed. There are two behaviours of the experiment: one we denote by pTails which involves passing through two states S1 = (Mon, Tails), S2 = (Tue, Tails); and the other we denote by pHeads which involves passing through one state S3 = (Mon, Heads). Let P = {S3}, which corresponds to the behaviour pHeads. That is, to say the experiment is in P is the same as saying it is in the behaviour pHeads. Note π(pHeads) = 1/2, but ΞX(P) = 1/3. So the thirders’ view is that the probability of the experiment being in the state corresponding to the behaviour pHeads (i.e. the probability of the experiment being in the behaviour pHeads) is actually different to the probability of pHeads occurring!

This halfer “probability” has some interesting characteristics:

There are some consequences of the definition for ΠX(P) above that relate to what some thirders claim are inconsistencies in the halfers’ position (to do with conditioning). In fact, in the context of calculating such probabilities, a form of ‘interference’ can arise for the series composite of two experiments (i.e. the experiment constructed as ‘first do experiment 1, then do experiment 2’), which does not arise for the probabilistic join of two experiments (i.e. the experiment constructed as ‘with probability p do experiment 1, with probability 1 − p do experiment 2’).

In a purely formal manner (and, of course, not in a deeper physical sense) this ‘non-locality’, and the importance of defining the starting and ending states of an experiment when calculating probabilities, reminds me of the interference of quantum mechanical experiments (as, say, described by Feynman in that gem of a book, QED). I have no idea whether this formal similarity has any significance at all or is completely superficial.

Giulio goes on to make an interesting conjecture about composition of Sleeping Beauty experiments:

We could describe this limiting case of a composite experiment as follows. You wake up in a room with a white glow. A voice speaks to you. “You have died, and you are now in eternity. Since you spent so much of your life thinking about probability puzzles, I have decided you will spend eternity mostly asleep and only be awoken in the following situations. Every Sunday I will toss a fair coin. If the toss is tails, I will wake you only on Monday and on Tuesday that week. If the toss is heads, I will only wake you on Monday that week. When you are awoken, I will say exactly the same words to you, namely what I am saying now. Shortly after I have finished speaking to you, I will put you back to sleep and erase the memory of your waking time.” The voice stops. Despite your sins, you can’t help yourself, and in the few moments you have before being put back to sleep you try to work out the probability that the last toss was heads. What do you decide it is?

In this limit, Giulio argues that a halfer progresses to the thirder position, assigning 1/3 to the probability that the last toss was heads!

These brief excerpts don’t do full justice to the framework Giulio has developed, but I do consider it a serious attempt to encompass all of the temporal/non-temporal, in-experiment/out-of-experiment subtleties that the Sleeping Beauty problem throws up. This paper is only for the mathematically inclined and, like so much written on this subject, I doubt it will convince many thirders, but if nothing else I hope it will put Giulio’s mind at rest having the paper published here on the Mule. Over recent weeks, his thoughts have been as plagued by this problem as have mine.

Sleeping Beauty

For the last couple of weeks, I have fallen asleep thinking about Sleeping Beauty. Not the heroine of the Charles Perrault fairy tale, or her Disney descendant, but the subject of a thought experiment first described in print by philosopher Adam Elga as follows:

Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Elga, A. “Self‐locating belief and the Sleeping Beauty problem”, Analysis 60, 143–147 (2000)

It has become traditional to add that Sleeping Beauty is initially put to sleep on Sunday and is either woken up on Monday (Heads) or Monday and Tuesday (Tails). Then on Wednesday she is woken for the final time and the experiment is over. She knows in advance exactly what is going to take place, believes the experimenters and trusts that the coin is fair.

Much like the Monty Hall problem, Sleeping Beauty has stirred enormous controversy. There are two primary schools of thought on this problem: the thirders and the halfers. Both sides have a broad range of arguments, but put simply they are as follows.

Halfers argue that the answer is 1/2. On Sunday Sleeping Beauty believed that the chance of Heads was 1/2, she has learned nothing new when waking and so the chances are still 1/2.

Thirders argue that the answer is 1/3. If the experiment is repeated over and over again, approximately 1/3 of her awakenings will follow Heads and 2/3 of them will follow Tails.
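The thirder frequency argument is easy to simulate (a sketch of my own, not from the original posts):

```python
import random

random.seed(1)
heads_awakenings = tails_awakenings = 0

for _ in range(100_000):            # repeat the experiment many times
    if random.random() < 0.5:       # Heads: Sleeping Beauty is woken once (Monday)
        heads_awakenings += 1
    else:                           # Tails: she is woken twice (Monday and Tuesday)
        tails_awakenings += 2

share = heads_awakenings / (heads_awakenings + tails_awakenings)
print(f"fraction of awakenings that follow Heads: {share:.3f}")   # close to 1/3
```

Note what the simulation does and does not settle: it counts awakenings, which is precisely the reference class thirders adopt and halfers dispute.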

I first came across this problem myself when a friend alerted me to a blog post by my former supervisor Bob Walters, who describes the thirder position as an “egregious error”. But as Bob notes, there are many in the thirder camp, including Adam Elga himself, physicist Sean Carroll and statistician Jeffrey Rosenthal.

As for my own view, I will leave you in suspense for now, mainly because I’m still thinking it through. Although superficially similar, I believe that it is a far more subtle problem than the Monty Hall problem and poses challenges to what it means to move the pure mathematical theory of probability into a real-world setting. Philosophers distinguish between the mathematical concept of “probability” and real-world “credence”, a Bayesian-style application of probability to real-world beliefs. I used to think that this was a bit fanciful on the part of philosophers. Now I am not so sure: applying probability is harder than it looks.

Let me know what you think!

Image Credit: Serena-Kenobi