It is high time, however, that we take our ignorance more seriously – Friedrich Hayek
Nobel laureate Paul Krugman recently claimed in the Paul vs. Paul debate that unmanaged economies are susceptible to damaging volatility. Personally, I see the issue a bit differently: an unmanaged economy is subject to manageable volatility, while a managed economy is subject to unmanageable volatility. Moreover, now that JP Morgan’s “hedging” mishap has carried on the tradition of massive bank trading blowups, and Spain’s Bankia collapse has once again initiated the summer of bank bailouts (not quite the summer of recovery, a term coined by Geithner) and given us a great example of managed stability, I thought it would be apt to connect a few dots.
In managed economies a false sense of stability is created as the central planners, in their collective wisdom, come to believe they have devised a perfect system without faults, while in reality volatility gathers steam below the surface. As the system approaches the breaking point, the central planners actively refuse to believe they might not be able to contain it. Eventually the volatility gushes overboard, perhaps to the point of disturbing civil order. It reaches the point where it can only be contained by essentially suspending civil rights; think of Henry Paulson telling Congress there would be tanks on the street, or the EU’s visionary leaders assuring you of the greatness of the ESM, an institution so important it must be exempted from all laws and whose passage was recently advocated by our finance minister Ms. Urpilainen by invoking the Pelosi-esque “we need to pass it in order to know what is in it” argument. This is what happens when information and price signals are actively suppressed in a managed economy, a creation so atrocious only its mother could love it.

Take Bankia as another example: how on earth did a perfectly “solvent” bank suddenly need 20 billion? If your first thought was accounting fraud and intentionally waiting to dump the losses on taxpayers, you were correct. But remember that in our lovely centrally planned, carefully regulated and managed stability, ravaging volatility just cannot happen. Don’t think about it too hard, though, or you might notice that it is completely untrue. Where were the regulators on this? What was the ECB doing? Or the Bank of Spain, the government of Spain, the many regulators in Spain, private sources? Personally, I have lost count of how many different bodies of government in Europe and the US have it in their mission statement to provide stability in the financial system.
Could it possibly be time to acknowledge that a managed economy creating stability cannot coexist with a bank going from perfectly solvent and “customers should have no troubles” to total collapse, nationalization and “we need 20 billion” overnight? I could of course also point out how bailing out banks worldwide makes them simply predate on taxpayers on an ever larger scale and completely ignore the risk side of the ledger in their business, to the point of outright accounting fraud, and that quite a few people have been telling us that not fixing problems leads to them not being fixed. I know this because Bankia, just like Dexia, is itself the earlier result of a government bailout. Fact is truly stranger than fiction.
In an unmanaged economy, information surfaces in a timely manner and economic agents adjust their decisions accordingly. Occasionally some ripples develop, but they deflate from much lower peaks and do not pose risks to the economy or to civil order, because the foundation is solid. Nothing but creative destruction takes place as resources are removed, for the most part, from the least economically viable and sensible projects and are organically directed towards the most viable ones. This is precisely what we should want to happen, and among many other things “making it impossibly expensive to fire people” is just not going to help.
This notion about the difference between unmanaged and managed economies and artificial suppression extends far beyond economics. In fact, economics is just a minor area of application: the same dynamic governs many society-level events and foreign policy, as discussed for example by Taleb (2011) in Foreign Affairs; it operates on a personal level, of which we once again received some discomforting evidence; and it appears in many phenomena in the natural sciences. 2008 was not quite a Yellowstone event yet, but if we continue to “rhyme with history”, as Mark Twain quipped, eventually we will get it right – or very, very wrong. Artificially depressing volatility and then denying the possibility of problems is not a policy that leads to stability in any field of study. It is and will continue to be associated with strong reactions that come unexpectedly, and unfortunately the worst part is often the one tied to the changes enabled by the massive volatility. Therefore we can reformulate the postulate from the opening into a more general form: unmanaged complex systems are subject to manageable volatility, while managed complex systems are subject to unmanageable volatility. Think of the collapse of the Soviet system or the Arab Spring. For a more elaborate formulation, see The Black Swan (2007).
With freedom comes some unpredictable fluctuation. This is one of life’s packages: there is no freedom without noise – and no stability without volatility – Nassim Nicholas Taleb
With the aforementioned premise in mind, let us consider the curious case of derivatives. The basic premise is simply to transfer the risk that originates from ownership of assets without transferring the actual underlying asset. Ergo, derivatives help in managing risk and probably lower aggregate risk in the economic system, provided that a few qualifiers are met. Moreover, this probably even qualifies as a proper innovation.
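The basic mechanics of this risk transfer can be sketched in a few lines. The numbers below are hypothetical, and the forward contract is simply the most elementary derivative with which to make the point:

```python
# Illustrative sketch: transferring price risk with a forward contract.
# Hypothetical numbers; a short forward locks in today's price for a holder
# of the asset without the asset itself changing hands.

spot_today = 100.0          # current price of the underlying asset
forward_price = 100.0       # forward price agreed today for later delivery

def pnl_unhedged(spot_at_delivery: float) -> float:
    """Change in the asset's value for a holder with no hedge."""
    return spot_at_delivery - spot_today

def pnl_hedged(spot_at_delivery: float) -> float:
    """Asset value change plus the payoff of a short forward position.
    The short forward pays (forward_price - spot), offsetting the move."""
    return (spot_at_delivery - spot_today) + (forward_price - spot_at_delivery)

for future_spot in (80.0, 100.0, 120.0):
    print(future_spot, pnl_unhedged(future_spot), pnl_hedged(future_spot))
# The hedged P&L is flat at 0.0 wherever the spot price ends up: the price
# risk has been transferred to the counterparty, not the asset itself.
```

Note that the counterparty of the short forward now carries the price risk; in aggregate the risk has moved to whoever is best placed to bear it, which is the qualifier that matters.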
In a capitalist system, we need to know who owns the assets so that we know approximately who is going to incur the cash flows and who bears the risk from various events. Enforcing this is very beneficial, and its absence easily leads to fraud. In fact, I do not consider a system of secret ownership to be part of capitalism at all; the essence of capitalism is the legal, enforceable connection created between persons and assets, one that is as much recognized and protected as it is transparent and public.
When we introduce derivatives into this picture, there is another layer of instruments to consider before we know who incurs the cash flows and who bears the risk. And of course you can form naked positions and speculate instead of hedging. It is very important that this stays transparent, because otherwise economic activity becomes very elastic to the downside the minute something really goes wrong. If the underwriting and ownership of derivatives instruments is completely opaque, then it becomes very hard to properly analyze the outcomes of any shocks. In fact, the very notion that we probably do not know, and that someone might have accumulated a significant risk position, is enough to make the financial markets pretty frail. Given opacity, there exists an easy way to accumulate risk in a fashion that is completely out of line with generally held beliefs in the marketplace and therefore gives rise to massive volatility once the truth comes out.
We have the option to just let this happen and then rely on central planning firefighters when something goes wrong, or we could follow the basic principles of capitalism: if you want to own something, you do so in your own name, and you do not create artificial credit without limits and essentially without the explicit legal right to do so.
Conveniently, there is a new dissertation just out from my school that strongly advocates this firefighter premise – somewhat unfortunately, I might add. According to the work, economic and financial crises must be handled with collective responsibility. Needless to say, I disagree; these viewpoints seem to derive their power from the need to validate what the political establishment has already carried out rather than from a desire to contribute to scientific knowledge. In my humble opinion, it is quite unfortunate that we give more ammunition and validation to those already fully immersed in their own hubris of central planning and god complex. Collective responsibility and guarantees act as the source of the problem, or as an accelerant, not as a solution.
Similar claims have of course been made for a long time: by Allen and Gale (1998), Greenspan (2004) and Caballero and Krishnamurthy (2008); by Keynesians in general, where public sector deficits fund increased welfare claims and stimulus spending; by monetarists in general, where the lender of last resort incentivizes against betting on further decline; and by the biggest group of welfare queens, the banking industry – but, most curiously, only when their bonuses are threatened.
On the other side of the aisle, one might want to point out the many crises of the free banking era or the long depression of the 1870s. However, quite a few perfectly free-market-compliant developments have taken place since: information flow is now instant everywhere, the institutional framework is much more robust (albeit once again in decline) and transaction costs are in numerous instances drastically lower.
It is always a bit bad when you have cause and effect mixed up; it is the collective guarantee that is causing the problems, together with our incomplete and faltering institutional framework and a monetary structure built on sand. To use the expression from Fisher (1933), it is libel against free market economics to claim that crises are handled with collective guarantees. Fisher’s (1933) Debt-Deflation Theory is nonetheless quite an interesting read given the last few years. Among other things, he explicitly advocates the creation of a separate stabilization commission. In his words: “If the debt-deflation theory of great depressions is essentially correct, the question of controlling the price level assumes a new importance; and those in the drivers’ seats – the Federal Reserve Board and the Secretary of the Treasury, or, let us hope, a special stabilization commission – will in future be held to a new accountability.” This is of course, as already established, almost complete folly, but it is interesting how deep the misconceptions about the interrelations of stability, imperfect knowledge and human action run in our thinking.
JP Morgan’s woes
Let us look at JPM’s problems as an example of a pretty isolated incident that nonetheless seems to continue a pattern. As Hayek (1964) wrote, it is the re-cognition of some regularity, of some similar feature in otherwise different circumstances, that makes us wonder and ask “why?” Our minds are so made that when we notice such regularity in diversity we suspect the presence of the same agent and become curious to detect it. It is to this trait of our minds that we owe whatever understanding and mastery of our environment we have achieved.
Even at this point it is hard to say whether this was about actively engaging in proprietary trading, or about risk management gone wrong due to the nature of the risk being hedged, the complexity of the structured instruments involved and the sheer magnitude of the “hedging operations”; perhaps the true state of things will remain a mystery far into the future. Banks with federally insured deposits are of course prohibited from proprietary trading under the Volcker Rule. A few issues, however, can be inferred from this episode.
Firstly, I do not believe this was about risk management; at best it was risk management completely misunderstood. It was also nice to see Mr. Dimon attribute the losses to himself and to stupidity within the organization.
Secondly, this incident probably got blown out of proportion, unless JPM still continues to misrepresent key issues. At the end of the day, they had an unhedged directional bet and lost a great deal of money, and that’s it.
On the negative side of things, Jamie Dimon either actively misrepresented the financials of the bank, or he genuinely did not know and still does not know what is going on at JPM. Being aware of many other incidents from Wall Street’s past, both possibilities seem perfectly plausible. It is also pretty hard to say which is worse.
Risk management is not that straightforward when the sums grow into hundreds of billions of dollars and you are supposed to hedge tail risk with instruments that are illiquid and not a very good match for the source of the risk. This is reality. The real problem, however, arises when these entities present their financials to investors and claim that their VaRs are in the 50-million-dollar range. And it becomes truly absurd when the organization that created the measure itself loses enough money in a very short period of time to stratospherically exceed its reported VaR; in this case, their Q1 trading VaR was $63 million. But were you to ask Mr. PhD Merton, he would still tell you that models always work, and do so in a normally distributed fashion; why else would JPM hire a former LTCM guy to head their “hedging” department?
Then some academic wizardry. Value-at-Risk is of course an intuitively rather compelling metric despite its apparent faults. Jorion (2002) finds VaR disclosures to be very informative for investors. Berkowitz and O’Brien (2002) find reported VaRs to be conservative and in some instances highly inaccurate; they are also not really value-adding compared to simpler models. Linsmeier and Pearson (2000) conclude that VaR is not a panacea; it is based on assumptions that are rarely met and, most importantly, assumes the future is like the past. Moreover, if you like the measure, you do not want to know what Taleb thinks about it. The real problems, however, are not the figures themselves; they tell you precisely the result of the formula used to calculate them. The real issues are systematic misrepresentation or ignorance.
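To make concrete the point that a VaR figure tells you precisely the result of the formula used to calculate it, here is a minimal historical-simulation VaR sketch in Python. The P&L history is synthetic, generated for illustration; nothing here is JPM’s actual data or methodology:

```python
import random

# Minimal sketch of one-day historical-simulation Value-at-Risk.
# Synthetic P&L history; real desks use actual revaluation histories.
random.seed(42)
daily_pnl = [random.gauss(0, 25e6) for _ in range(250)]  # ~$25m daily stdev

def historical_var(pnl_history, confidence=0.95):
    """VaR = the loss threshold exceeded on (1 - confidence) of past days."""
    losses = sorted(-p for p in pnl_history)   # losses as positive numbers
    index = int(confidence * len(losses))
    return losses[index]

var_95 = historical_var(daily_pnl)
print(f"95% one-day VaR: ${var_95 / 1e6:.0f}m")
# The figure is exactly what the formula says it is -- a quantile of the
# past -- so a single structural break (an illiquid hedge unwinding, say)
# can produce a loss many multiples of it without the arithmetic being wrong.
```

The measure is not lying about anything; the misrepresentation begins when a backward-looking quantile is presented as a bound on future losses.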
The fallacy of being in control is a very big problem when it is allowed to fester under the surface. Taking a couple of cues from Taleb, it is somewhat amusing to read about instances where those who have made slightly larger than normal losses think the problem is rectified by changing the model, i.e. including a couple more parameters or tweaking the slope. The problem is that your entire paradigm of thinking is wrong; the model itself may be just fine.
There is a big problem in forcing mathematics into economics; at its best economics is philosophy and pure logical argumentation, at its worst, it is mathematics. Samuelson et al. have done a great disservice to economics with the endeavor they started. Hayek elaborated on this in his Nobel acceptance speech in 1974 by stating that “It seems to me that this failure of the economists to guide policy more successfully is closely connected with their propensity to imitate as closely as possible the procedures of the brilliantly successful physical sciences – an attempt which in our field may lead to outright error. It is an approach which has come to be described as the ‘scientistic’ attitude – an attitude which, as I defined it some thirty years ago, ‘is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed.’” Real science, think physics, revolves around constants. Economics, however, does not have constants; it has arguments. In science, arguments do not have validity: a chemical reaction formula does not concern itself with your feelings, it only works in the one correct way. The laws of motion are, because they just are, not because someone formulates a compelling hypothesis and argument. In the social “sciences”, however, argument is everything. In fact, arguments create new “scientific” knowledge and even reality, because there are no constants, only a group of humans with some set of beliefs, perceptions, wants and desires. The “constants” in this case can be manipulated to be just about anything.
George Soros recently made the following remarks at the Festival of Economics in Trento, Italy: “I believe that the failure is more profound than generally recognized. It goes back to the foundations of economic theory. Economics tried to model itself on Newtonian physics. It sought to establish universally and timelessly valid laws governing reality. But economics is a social science and there is a fundamental difference between the natural and social sciences. Social phenomena have thinking participants who base their decisions on imperfect knowledge. That is what economic theory has tried to ignore.”
“Scientific method needs an independent criterion, by which the truth or validity of its theories can be judged. Natural phenomena constitute such a criterion; social phenomena do not. That is because natural phenomena consist of facts that unfold independently of any statements that relate to them. The facts then serve as objective evidence by which the validity of scientific theories can be judged. That has enabled natural science to produce amazing results.” It is easy to agree with everything Mr. Soros stated.
Moreover, the answer to the question “What is the appropriate function and form of the government?” keeps the nature of the social sciences very different from physics, because it alters your perception of reality quite a bit. On top of that, there is a reflexive process between the cognitive function of how you see reality and the participating function of how you would like reality to be, something that Soros (2006) discusses in The Age of Fallibility, and elsewhere in what he calls his personal framework for understanding the world.
The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence are usually the slaves of some defunct economist – John Maynard Keynes
When you force mathematics into economics, you are bound to become the slave of your own misconceptions. The formulas are not accurate prediction tools, just approximations of past correlations, and should be relegated to that function: tools of descriptive statistics. It is certainly true that quantitative finance is a big deal. However, the fact that a set of trading algorithms works is not a validation of the formulas in any scientific sense. Sure, they might make a great deal of money and are therefore validated in a business sense, but when it comes to science the prediction must be 100 per cent accurate; otherwise it is just an imitation of the past. Hayek (1964) argues in his Theory of Complex Phenomena that: “The phenomena in which we are interested, such as competition, could not occur at all unless the number of distinct elements involved were fairly large, and that the overall pattern that will form itself is determined by the significantly different behavior of the different individuals so that the obstacle of obtaining the relevant data cannot be overcome by treating them as members of a statistical collective. For this reason economic theory is confined to describing kinds of patterns which will appear if certain general conditions are satisfied, but can rarely if ever derive from this knowledge any predictions of specific phenomena.”
Mathematics is the language of science; it is very exact, and economics is anything but. Take Okun’s “law”, for example; it is something that the usual suspects look at pretty closely. It relates changes in GDP to changes in unemployment. Over time, we fine-tune the formula to best reflect past correlation. Recently, however, the formula has broken down completely, and of course economists are busy reworking it to take into account something new, as if that were the problem.
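To show what “fine-tuning the formula to reflect past correlation” amounts to in practice, here is a sketch of fitting an Okun-style relation by ordinary least squares. The data points are invented for illustration, not actual macro data:

```python
# Sketch of how an Okun-style "law" is fitted: an ordinary least-squares
# line through past (GDP growth, change in unemployment) pairs.
# Synthetic numbers for illustration only.

gdp_growth = [3.1, 2.4, -0.5, 4.0, 1.8, -2.9, 2.2, 3.5]     # % per year
d_unemp    = [-0.5, -0.2, 1.0, -1.0, 0.1, 2.0, -0.1, -0.8]  # pp change

n = len(gdp_growth)
mean_x = sum(gdp_growth) / n
mean_y = sum(d_unemp) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(gdp_growth, d_unemp)) \
        / sum((x - mean_x) ** 2 for x in gdp_growth)
intercept = mean_y - slope * mean_x
print(f"fitted: d_unemployment = {intercept:.2f} + {slope:.2f} * gdp_growth")
# The coefficients are nothing but a summary of this particular sample;
# when the relationship "breaks down", refitting them only chases the past.
```

There is no constant of nature anywhere in this procedure, only a best-fit line through whatever history happens to be in the sample.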
Economics even cherishes something called ceteris paribus, which means that if you are studying a complicated system consisting of sub-factors a, b, c, d, e, f, g and h, you can just ignore everything except a in order to understand how a plays its part in the whole. That is a bit of an exaggeration, but it is this type of thinking that leads people to conclude that they can centrally plan complex systems. Hayek (1964) had this to say on the matter: “The statistical method is therefore of use only where we either deliberately ignore, or are ignorant of, the relations between the individual elements with different attributes, i.e., where we ignore or are ignorant of any structure into which they are organized. Statistics in such situations enables us to regain simplicity and to make the task manageable by substituting a single attribute for the unascertainable individual attributes in the collective. It is, however, for this reason irrelevant to the solution of problems in which it is the relations between individual elements with different attributes which matters.”
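Hayek’s point about substituting “a single attribute for the unascertainable individual attributes” can be illustrated with a toy example: two collectives with identical summary statistics but completely different relations between their elements. The numbers are, of course, invented:

```python
import statistics

# Two toy "economies": each element is an agent's output paired with its
# trading partner's output. Collapsed into a statistical collective, they
# look identical; their internal structure is opposite.

economy_a = [(1, 9), (3, 7), (5, 5), (7, 3), (9, 1)]   # complementary pairs
economy_b = [(1, 1), (3, 3), (5, 5), (7, 7), (9, 9)]   # correlated pairs

flat_a = [v for pair in economy_a for v in pair]
flat_b = [v for pair in economy_b for v in pair]

# Treated "as members of a statistical collective", they are identical:
print(statistics.mean(flat_a), statistics.mean(flat_b))      # same mean
print(statistics.pstdev(flat_a), statistics.pstdev(flat_b))  # same spread

# But the relations between the elements differ completely:
sum_within_a = {x + y for x, y in economy_a}   # every pair sums to 10
sum_within_b = {x + y for x, y in economy_b}   # pair sums vary wildly
print(sum_within_a, sum_within_b)
```

Any policy conclusion drawn from the summary statistics alone would treat these two structures as the same system, which is precisely the error Hayek is describing.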
It is probably – and perhaps somewhat unfortunately – in the nature of man’s mind to extract as much as possible from the social sciences and recast it as exact mathematical relationships, however frail their robustness. Given the existence of these tools in the realm of the natural sciences, man cannot resist the temptation to veer out of the abstract and into the definitive, and therefore becomes the slave of his own misconceptions, even though no less an authority than Karl Popper advocated precisely this premise.
There could be a little renaissance over these topics in the aftermath of this depression of the 21st century; there certainly should be one. Something positive could emerge from the intellectual despair revolving around this financial crisis, but very little has been achieved so far. I like to think we are at some advanced stage of Kuhn’s scientific crisis: heterodox views are gaining clout, both through positive confirmation, whether anecdotal or fully statistically relevant, and through the crisis of confidence among the orthodox school, which continues to suffer setbacks as its central planning conglomerate makes it look ever more incompetent. If so, we are bound to leave behind some of the now archaic notions and replace them with something new.