Hayek, Radner and Rational-Expectations Equilibrium

In revising my paper on Hayek and Three Equilibrium Concepts, I have made some substantial changes to the last section, which I originally posted last June. So I thought I would post my new updated version of the last section. The new version of the paper has not yet been submitted to a journal; I will give a talk about it at the colloquium on Economic Institutions and Market Processes at the NYU economics department next Monday. Depending on the reaction I get at the colloquium and from some other people I will send the paper to, I may, or may not, post the new version on SSRN and submit it to a journal.

In this section, I want to focus on a particular kind of intertemporal equilibrium: rational-expectations equilibrium. It is noteworthy that in his discussions of intertemporal equilibrium, Roy Radner assigns a meaning to the term “rational-expectations equilibrium” very different from the one normally associated with that term. Radner describes a rational-expectations equilibrium as the equilibrium that results when some agents are able to make inferences about the beliefs of other agents from the differences between observed prices and the prices that they had expected. Agents attribute those differences to the superior information held by better-informed agents. As they assimilate the information that must have caused observed prices to deviate from their expectations, agents revise their own expectations accordingly, which, in turn, leads to further revisions in plans, expectations and outcomes.

There is a somewhat famous historical episode of inferring otherwise unknown or even secret information from publicly available data about prices. In 1954, one very rational agent, Armen Alchian, was able to identify which chemicals were being used in making the newly developed hydrogen bomb by looking for companies whose stock prices had risen too rapidly to be otherwise explained. Alchian, who spent almost his entire career at UCLA while moonlighting at the nearby Rand Corporation, wrote a paper at Rand listing the chemicals used in making the hydrogen bomb. When news of his unpublished paper reached officials at the Defense Department – the Rand Corporation (from whose files Daniel Ellsberg took the Pentagon Papers) having been started as a think tank with funding by the Department of Defense to do research on behalf of the U.S. military – the paper was confiscated from Alchian’s office at Rand and destroyed. (See Newhard’s paper for an account of the episode and a reconstruction of Alchian’s event study.)

But Radner also showed that the ability of some agents to infer the information that is causing observed prices to differ from the prices that had been expected does not necessarily lead to an equilibrium. The process of revising expectations in light of observed prices may not converge on a shared set of expectations of future prices based on common knowledge. Radner’s result reinforces Hayek’s insight, upon which I remarked above, that although expectations are equilibrating variables there is no economic mechanism that tends to bring expectations toward their equilibrium values. There is no feedback mechanism, corresponding to the normal mechanism for adjusting market prices in response to perceived excess demands or supplies, that operates on price expectations. The heavy lifting of bringing expectations into correspondence with what the future holds must be done by the agents themselves; the magic of the market goes only so far.

Although Radner’s conception of rational expectations differs from the more commonly used meaning of the term, his conception helps us understand the limitations of the conventional “rational expectations” assumption in modern macroeconomics, which is that the price expectations formed by the agents populating a model should be consistent with what the model itself predicts that those future prices will be. In this very restricted sense, I believe rational expectations is an important property of any model. If one assumes that the outcome expected by agents in a model is the equilibrium predicted by the model, then, under those expectations, the solution of the model ought to be the equilibrium of the model. If the solution of the model is somehow different from what agents in the model expect, then there is something really wrong with the model.

What kind of crazy model would have the property that correct expectations turn out not to be self-fulfilling? A model in which correct expectations are not self-fulfilling is a nonsensical model. But there is a huge difference between saying (a) that a model should have the property that correct expectations are self-fulfilling and saying (b) that the agents populating the model understand how the model works and, based on their knowledge of the model, form expectations of the equilibrium predicted by the model.

Rational expectations in the former sense is a minimal consistency property of an economic model; rational expectations in the latter sense is an empirical assertion about the real world. You can make such an assumption if you want, but you can’t credibly claim that it is a property of the real world. Whether it is a property of the real world is a matter of fact, not a methodological imperative. But the current sacrosanct status of rational expectations in modern macroeconomics has been achieved largely through methodological tyrannizing.

In his 1937 paper, Hayek was very clear that correct expectations are logically implied by the concept of an equilibrium of plans extending through time. But correct expectations are not a necessary, or even descriptively valid, characteristic of reality. Hayek also conceded that we don’t even have an explanation in theory of how correct expectations come into existence. He merely alluded to the empirical observation – perhaps not the most faithful description of empirical reality in 1937 – that there is an observed general tendency for markets to move toward equilibrium, implying that, over time, expectations somehow do tend to become more accurate.

It is worth pointing out that when John Muth (1961) introduced the idea of rational expectations, he did so in the context of partial-equilibrium models in which the rational expectation in the model was the rational expectation of the equilibrium price in a particular market. The motivation for Muth to introduce the idea of a rational expectation was the cobweb-cycle model, in which producers base current decisions about how much to produce for the following period on the currently observed price. But with a one-period time lag between production decisions and realized output, as is the case in agricultural markets in which the initial application of inputs does not result in output until a subsequent time period, it is easy to generate an alternating sequence of boom and bust, with current high prices inducing increased output in the following period, driving prices down, thereby inducing low output and high prices in the next period, and so on.

Muth argued that rational producers would not respond to price signals in a way that led to consistently mistaken expectations, but would instead base their price expectations on a realistic assessment of what future prices would turn out to be. In his microeconomic work on rational expectations, Muth showed that the rational-expectations assumption was a better predictor of observed prices than the assumption of static expectations underlying the traditional cobweb-cycle model. So Muth’s rational-expectations assumption was based on a realistic conjecture about how real-world agents actually form expectations. In that sense, Muth’s assumption was consistent with Hayek’s conjecture that there is an empirical tendency for markets to move toward equilibrium.
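
To see the mechanics, here is a minimal sketch of the cobweb model in Python; the linear demand and supply schedules and all parameter values are illustrative assumptions of mine, not Muth’s:

```python
# A minimal sketch of cobweb dynamics under static vs. rational expectations.
# The linear schedules and all parameter values are illustrative assumptions,
# not taken from Muth (1961).

# Demand: q = alpha - beta * p  =>  market-clearing price p = (alpha - q) / beta
# Supply, chosen one period in advance: q = gamma + delta * expected_price
alpha, beta = 20.0, 1.0
gamma, delta = 2.0, 1.5            # delta > beta makes the oscillation explosive

p_star = (alpha - gamma) / (beta + delta)    # equilibrium price

# Static expectations: producers expect last period's price to persist.
p = p_star + 1.0                   # start one unit above equilibrium
path = []
for _ in range(8):
    q = gamma + delta * p          # output planned on the expected price
    p = (alpha - q) / beta         # realized market-clearing price
    path.append(round(p, 2))

print("equilibrium price:", round(p_star, 2))
print("static-expectations path:", path)     # alternating boom and bust

# Rational expectations: producers expect p_star itself, so planned output is
# gamma + delta * p_star and the realized price is p_star in every period.
```

With static expectations, the realized price alternates above and below the equilibrium price; with the rational expectation of the equilibrium price, the boom-bust alternation disappears.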

So, while Muth’s introduction of the rational-expectations hypothesis was an empirically progressive theoretical innovation, extending rational expectations into the domain of macroeconomics has not been empirically progressive, rational-expectations models having consistently failed to generate better predictions than macro-models using other expectational assumptions. Instead, a rational-expectations axiom has been imposed as part of a spurious methodological demand that all macroeconomic models be “micro-founded.” But the deeper point – one that Hayek understood better than perhaps anyone else — is that there is a difference in kind between forming rational expectations about a single market price and forming rational expectations about the vector of n prices on the basis of which agents are choosing or revising their optimal intertemporal consumption and production plans.

It is one thing to assume that agents have some expert knowledge about the course of future prices in the particular markets in which they participate regularly; it is another thing entirely to assume that they have knowledge sufficient to forecast the course of all future prices and in particular to understand the subtle interactions between prices in one market and the apparently unrelated prices in another market. It is those subtle interactions that allow the kinds of informational inferences that, based on differences between expected and realized prices of the sort contemplated by Alchian and Radner, can sometimes be made. The former kind of knowledge is knowledge that expert traders might be expected to have; the latter kind of knowledge is knowledge that would be possessed by no one but a nearly omniscient central planner, whose existence was shown by Hayek to be a practical impossibility.

The key — but far from the only — error of the rational-expectations methodology that rules modern macroeconomics is the belief that rational expectations somehow cause or bring about an intertemporal equilibrium. It is certainly a fact that people try very hard to use all the information available to them to predict what the future has in store, and any new bit of information not previously possessed will be rapidly assessed and assimilated and will inform a possibly revised set of expectations of the future. But there is no reason to think that this ongoing process of information gathering, processing and evaluation leads people to formulate correct expectations of the future or of future prices. Indeed, Radner proved that, even under strong assumptions, there is no necessity that a process of information revision based on the differences between observed and expected prices leads to an equilibrium.

So it cannot be rational expectations that leads to equilibrium. On the contrary, rational expectations are a property of equilibrium. To speak of a “rational-expectations equilibrium” is to utter a truism. There can be no rational expectations in the macroeconomy except in an equilibrium state, because correct expectations, as Hayek showed, are a defining characteristic of equilibrium. Outside of equilibrium, expectations cannot be rational. Failure to grasp that point is what led Morgenstern astray in thinking that the Holmes-Moriarty story demonstrated the nonsensical nature of equilibrium. It simply demonstrated that Holmes and Moriarty were playing a non-repeated game in which an equilibrium did not exist.

To think of rational expectations as somehow bringing about equilibrium is nothing but a category error, akin to thinking that a triangle is caused by having angles that add up to 180 degrees. The 180-degree sum of the angles of a triangle doesn’t cause the triangle; it is a property of the triangle.

Standard macroeconomic models are typically so highly aggregated that the extreme nature of the rational-expectations assumption is effectively suppressed. To treat all output as a single good (which involves treating the single output as both a consumption good and a productive asset generating a flow of productive services) effectively imposes the assumption that the only relative price that can ever change is the wage, so that all but one of the future relative prices are known in advance. That assumption effectively assumes away the problem of incorrect expectations except for two variables: the future price level and the future productivity of labor (owing to the productivity shocks so beloved of Real Business Cycle theorists).

Having eliminated all complexity from their models, modern macroeconomists, purporting to solve micro-founded macromodels, simply assume that there are just a couple of variables about which agents have to form their rational expectations. The radical simplification of the expectational requirements for achieving a supposedly micro-founded equilibrium belies the claim to have achieved anything of the sort. Whether the micro-foundational pretense affected – with apparently sincere methodological fervor – by modern macroeconomics is merely self-delusion or a deliberate hoax perpetrated on a generation of unsuspecting students is an interesting question, but one lacking any practical significance.

Four score years after Hayek explained how challenging the notion of intertemporal equilibrium really is and the difficulties inherent in explaining any empirical tendency toward intertemporal equilibrium, modern macroeconomics has succeeded in assuming all those difficulties out of existence. Many macroeconomists feel rather proud of what modern macroeconomics has achieved. I am not quite as impressed as they are.

On Equilibrium in Economic Theory

Here is the introduction to a new version of my paper, “Hayek and Three Concepts of Intertemporal Equilibrium,” which I presented last June at the History of Economics Society meeting in Toronto, and which I posted piecemeal in a series of posts last May and June. This post corresponds to the first part of this post from last May 21.

Equilibrium is an essential concept in economics. While equilibrium is an essential concept in other sciences as well, and was probably imported into economics from physics, its meaning in economics cannot be straightforwardly carried over from its meaning in physics. The dissonance between the physical meaning of equilibrium and its economic interpretation required a lengthy process of explication and clarification before the concept and its essential, though limited, role in economic theory could be coherently explained.

The concept of equilibrium having originally been imported from physics at some point in the nineteenth century, economists probably thought it natural to think of an economic system in equilibrium as analogous to a physical system at rest, in the sense of a system in which there was no movement or in which all movements were repetitive. But what would it mean for an economic system to be at rest? The obvious answer was to say that prices of goods and the quantities produced, exchanged and consumed would not change. If supply equals demand in every market, and if no exogenous disturbance – e.g., in population, technology, or tastes – displaces the system, then there would seem to be no reason for the prices paid and quantities produced to change. But that conception of an economic system at rest was understood to be overly restrictive, given the large, and perhaps causally important, share of economic activity – saving and investment – that is predicated on the assumption and expectation that prices and quantities will not remain constant.

The model of a stationary economy at rest in which all economic activity simply repeats what has already happened before did not seem very satisfying or informative to economists, but that view of equilibrium remained dominant in the nineteenth century and for perhaps the first quarter of the twentieth. Equilibrium was not an actual state that an economy could achieve; it was just an end state that economic processes would move toward if given sufficient time to play themselves out with no disturbing influences. This idea of a stationary timeless equilibrium is found in the writings of the classical economists, especially Ricardo and Mill, who used the idea of a stationary state as the end-state toward which natural economic processes were driving an economic system.

This not-very-satisfactory concept of equilibrium was undermined when Jevons, Menger, Walras, and their followers began to develop the idea of optimizing decisions by rational consumers and producers. The notion of optimality provided the key insight that made it possible to refashion the earlier classical equilibrium concept into a new, more fruitful and robust, version.

If each economic agent (household or business firm) is viewed as making optimal choices, based on some scale of preferences, and subject to limitations or constraints imposed by their capacities, endowments, technologies, and the legal system, then the equilibrium of an economy can be understood as a state in which each agent, given his subjective ranking of the feasible alternatives, is making an optimal decision, and each optimal decision is both consistent with, and contingent upon, those of all other agents. The optimal decisions of each agent must simultaneously be optimal from the point of view of that agent while being consistent, or compatible, with the optimal decisions of every other agent. In other words, the decisions of all buyers of how much to purchase must be consistent with the decisions of all sellers of how much to sell. But every decision, just like every piece in a jig-saw puzzle, must fit perfectly with every other decision. If any decision is suboptimal, none of the other decisions contingent upon that decision can be optimal.

The idea of an equilibrium as a set of independently conceived, mutually consistent, optimal plans was latent in the earlier notions of equilibrium, but it could only be coherently articulated on the basis of a notion of optimality. Originally framed in terms of utility maximization, the notion was gradually extended to encompass the ideas of cost minimization and profit maximization. The general concept of an optimal plan having been grasped, it then became possible to formulate a generically economic idea of equilibrium, not in terms of a system at rest, but in terms of the mutual consistency of optimal plans. Once equilibrium was conceived as the mutual consistency of optimal plans, the needless restrictiveness of defining equilibrium as a system at rest became readily apparent, though it remained little noticed and its significance overlooked for quite some time.

Because the defining characteristics of economic equilibrium are optimality and mutual consistency, change, even non-repetitive change, is not logically excluded from the concept of equilibrium as it was from the idea of an equilibrium as a stationary state. An optimal plan may be carried out, not just at a single moment, but over a period of time. Indeed, the idea of an optimal plan is, at the very least, suggestive of a future that need not simply repeat the present. So, once the idea of equilibrium as a set of mutually consistent optimal plans was grasped, it was to be expected that the concept of equilibrium could be formulated in a manner that accommodates the existence of change and development over time.

But the manner in which change and development could be incorporated into an equilibrium framework of optimality was not entirely straightforward, and it required an extended process of further intellectual reflection to formulate the idea of equilibrium in a way that gives meaning and relevance to the processes of change and development that make the passage of time something more than merely a name assigned to one of the n dimensions in vector space.

This paper examines the slow process by which the concept of equilibrium was transformed from a timeless or static concept into an intertemporal one by focusing on the pathbreaking contribution of F. A. Hayek, who first articulated the concept, and by exploring the connection between his articulation and three noteworthy, but very different, versions of intertemporal equilibrium: (1) an equilibrium of plans, prices, and expectations, (2) temporary equilibrium, and (3) rational-expectations equilibrium.

But before discussing these three versions of intertemporal equilibrium, I summarize in section two Hayek’s seminal 1937 contribution clarifying the necessary conditions for the existence of an intertemporal equilibrium. Then, in section three, I elaborate on an important, and often neglected, distinction, first stated and clarified by Hayek in his 1937 paper, between perfect foresight and what I call contingently correct foresight. That distinction is essential for an understanding of the distinction between the canonical Arrow-Debreu-McKenzie (ADM) model of general equilibrium, and Roy Radner’s 1972 generalization of that model as an equilibrium of plans, prices and price expectations, which I describe in section four.

Radner’s important generalization of the ADM model captured the spirit and formalized Hayek’s insights about the nature and empirical relevance of intertemporal equilibrium. But to be able to prove the existence of an equilibrium of plans, prices and price expectations, Radner had to make assumptions about agents that Hayek, in his philosophically parsimonious view of human knowledge and reason, had been unwilling to accept. In section five, I explore how J. R. Hicks’s concept of temporary equilibrium, clearly inspired by Hayek, though credited by Hicks to Erik Lindahl, provides an important bridge connecting the pure hypothetical equilibrium of correct expectations and perfect consistency of plans with the messy real world in which expectations are inevitably disappointed and plans routinely – and sometimes radically – revised. The advantage of the temporary-equilibrium framework is to provide the conceptual tools with which to understand how financial crises can occur and how such crises can be propagated and transformed into economic depressions, thereby making possible the kind of business-cycle model that Hayek tried unsuccessfully to create. But just as Hicks unaccountably failed to credit Hayek for the insights that inspired his temporary-equilibrium approach, Hayek failed to see the potential of temporary equilibrium as a modeling strategy that combines the theoretical discipline of the equilibrium method with the reality of expectational inconsistency across individual agents.

In section six, I discuss the Lucasian idea of rational expectations in macroeconomic models, mainly to point out that, in many ways, it simply assumes away the problem of the expectational consistency of plans with which Hayek, Hicks, Radner and the others who developed the idea of intertemporal equilibrium were so profoundly concerned.

The Phillips Curve and the Lucas Critique

With unemployment at the lowest levels since the start of the millennium (initial unemployment claims in February were the lowest since 1973!), lots of people are starting to wonder if we might be headed for a pick-up in the rate of inflation, which has been averaging well under 2% a year since the financial crisis of September 2008 ushered in the Little Depression of 2008-09 and beyond. The Fed has already signaled its intention to continue raising interest rates even though inflation remains well anchored at rates below the Fed’s 2% target. And among Fed watchers and Fed cognoscenti, the only question being asked is not whether the Fed will raise its Fed Funds rate target, but how frequent those (presumably) quarter-point increments will be.

The prevailing view seems to be that the thought process of the Federal Open Market Committee (FOMC) in raising interest rates — even before there is any real evidence of an increase in an inflation rate that is still below the Fed’s 2% target — is that a preemptive strike is required to prevent inflation from accelerating and rising above what has become an inflation ceiling — not an inflation target — of 2%.

Why does the Fed believe that inflation is going to rise? That’s what the econoblogosphere has, of late, been trying to figure out. And the consensus seems to be that the FOMC has concluded that the risk that inflation will break the 2% ceiling it has implicitly adopted has become unacceptably high. That risk assessment is based on some sort of analysis in which it is inferred from the Phillips Curve that, with unemployment nearing historically low levels, rising inflation has become dangerously likely. And so the next question is: why is the FOMC fretting about the Phillips Curve?

In a blog post earlier this week, David Andolfatto of the St. Louis Federal Reserve Bank tried to spell out in some detail the kind of reasoning that lay behind the FOMC decision to actively tighten the stance of monetary policy to avoid any increase in inflation. At the same time, Andolfatto expressed his own view that the rate of inflation is determined not by the rate of unemployment, but by the stance of monetary policy.

Andolfatto’s avowal of monetarist faith in the purely monetary forces that govern the rate of inflation elicited a rejoinder from Paul Krugman expressing considerable annoyance at Andolfatto’s monetarism.

Here are three questions about inflation, unemployment, and Fed policy. Some people may imagine that they’re the same question, but they definitely aren’t:

  1. Does the Fed know how low the unemployment rate can go?
  2. Should the Fed be tightening now, even though inflation is still low?
  3. Is there any relationship between unemployment and inflation?

It seems obvious to me that the answer to (1) is no. We’re currently well below historical estimates of full employment, and inflation remains subdued. Could unemployment fall to 3.5% without accelerating inflation? Honestly, we don’t know.

Agreed.

I would also argue that the Fed is making a mistake by tightening now, for several reasons. One is that we really don’t know how low U can go, and won’t find out if we don’t give it a chance. Another is that the costs of getting it wrong are asymmetric: waiting too long to tighten might be awkward, but tightening too soon increases the risks of falling back into a liquidity trap. Finally, there are very good reasons to believe that the Fed’s 2 percent inflation target is too low; certainly the belief that it was high enough to make the zero lower bound irrelevant has been massively falsified by experience.

Agreed, but the better approach would be to target the price level, or, even better, nominal GDP, so that short-term undershooting of the inflation target would provide increased leeway for inflation to overshoot the target later without undermining the credibility of the commitment to price stability.

But should we drop the whole notion that unemployment has anything to do with inflation? Via FTAlphaville, I see that David Andolfatto is at it again, asserting that there’s something weird about asserting an unemployment-inflation link, and that inflation is driven by an imbalance between money supply and money demand.

But one can fully accept that inflation is driven by an excess supply of money without denying that there is a link between inflation and unemployment. In the normal course of events, an excess supply of money may lead to increased spending as people attempt to exchange their excess cash balances for real goods and services. The increased spending can induce additional output and additional employment along with rising prices. The reverse happens when there is an excess demand for cash balances and people attempt to build up their cash holdings by cutting back their spending, thereby reducing output. So the inflation-unemployment relationship results from the effects induced by a particular causal circumstance. Nor does any of this mean that an imbalance between the supply of and the demand for money is the only cause of inflation or of price-level changes.

Inflation can also result from nothing more than the anticipation of inflation. Expected inflation can also affect output and employment, so inflation and unemployment are related not only because both are affected by an excess supply of (or demand for) money, but also because both are affected by expected inflation.

Even if you think that inflation is fundamentally a monetary phenomenon (which you shouldn’t, as I’ll explain in a minute), wage- and price-setters don’t care about money demand; they care about their own ability or lack thereof to charge more, which has to – has to – involve the amount of slack in the economy. As Karl Smith pointed out a decade ago, the doctrine of immaculate inflation, in which money translates directly into inflation – a doctrine that was invoked to predict inflationary consequences from Fed easing despite a depressed economy – makes no sense.

There’s no reason for anyone to care about overall money demand in this scenario. Price setters respond to the perceived change in the rate of spending induced by an excess supply of money. (I note parenthetically that I am referring now to an excess supply of base money, not to an excess supply of bank-created money, which, unlike base money, is not a hot potato, because it can be withdrawn from circulation in response to market incentives.) Now some price setters may actually use macroeconomic information to forecast price movements, but recognizing that channel would take us into the realm of an expectations theory of inflation, not the strict monetary theory of inflation that Krugman is criticizing.

And the claim that there’s weak or no evidence of a link between unemployment and inflation is sustainable only if you insist on restricting yourself to recent U.S. data. Take a longer and broader view, and the evidence is obvious.

Consider, for example, the case of Spain. Inflation in Spain is definitely not driven by monetary factors, since Spain hasn’t even had its own money since it joined the euro. Nonetheless, there have been big moves in both Spanish inflation and Spanish unemployment:

That period of low unemployment, by Spanish standards, was the result of huge inflows of capital, fueling a real estate bubble. Then came the sudden stop after the Greek crisis, which sent unemployment soaring.

Meanwhile, the pre-crisis era was marked by relatively high inflation, well above the euro-area average; the post-crisis era by near-zero inflation, below the rest of the euro area, allowing Spain to achieve (at immense cost) an “internal devaluation” that has driven an export-led recovery.

So, do you really want to claim that the swings in inflation had nothing to do with the swings in unemployment? Really, really?

No one – at least no one who believes in a monetary theory of inflation – claims, or should claim, that swings in inflation and unemployment are unrelated, but to acknowledge the relationship between inflation and unemployment does not entail acceptance of the proposition that unemployment is a causal determinant of inflation.

But if you concede that unemployment had a lot to do with Spanish inflation and disinflation, you’ve already conceded the basic logic of the Phillips curve. You may say, with considerable justification, that U.S. data are too noisy to have any confidence in particular estimates of that curve. But denying that it makes sense to talk about unemployment driving inflation is foolish.

No, it’s not foolish, because the relationship between inflation and unemployment is not a causal relationship; it’s a coincidental relationship. The level of employment depends on many things, and some of the things that employment depends on also affect inflation. That doesn’t mean that employment causally affects inflation.

When I read Krugman’s post and the Andolfatto post that provoked Krugman, it occurred to me that the way to summarize all of this is to say that unemployment and inflation are determined by a variety of deep structural (causal) relationships. The Phillips Curve, although it was once fashionable to refer to it as the missing equation in the Keynesian model, is not a structural relationship; it is a reduced form. The negative relationship between unemployment and inflation that is found by empirical studies does not tell us that high unemployment reduces inflation, any more than a positive empirical relationship between the price of a commodity and the quantity sold would tell you that the demand curve for that product is positively sloped.
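
A toy simulation makes the point concrete. The data-generating process below, written in Python, is entirely my own illustrative assumption – a common nominal-spending shock moving inflation and unemployment in opposite directions – not an estimated model:

```python
# A toy data-generating process, entirely assumed, in which a common cause --
# a nominal-spending shock -- moves inflation and unemployment in opposite
# directions, while unemployment exerts no causal effect on inflation.

import random

random.seed(42)
inflation, unemployment = [], []
for _ in range(200):
    shock = random.gauss(0, 1)                      # common causal factor
    inflation.append(2.0 + 0.8 * shock + random.gauss(0, 0.3))
    unemployment.append(5.0 - 0.5 * shock + random.gauss(0, 0.3))

# Sample correlation between the two series:
n = len(inflation)
mi, mu = sum(inflation) / n, sum(unemployment) / n
cov = sum((x - mi) * (y - mu) for x, y in zip(inflation, unemployment)) / n
sd_i = (sum((x - mi) ** 2 for x in inflation) / n) ** 0.5
sd_u = (sum((y - mu) ** 2 for y in unemployment) / n) ** 0.5
print("Phillips-curve correlation:", round(cov / (sd_i * sd_u), 2))  # about -0.8
```

The simulated data trace out a robust negative Phillips-curve correlation even though, by construction, unemployment has no causal effect on inflation; the reduced form is real, but it is not structural.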

It may be interesting to know that there is a negative empirical relationship between inflation and unemployment, but we can’t rely on that relationship in making macroeconomic policy. I am not a big admirer of the Lucas Critique for reasons that I have discussed in other posts (e.g., here and here). But the Lucas Critique, a rather trivial result that was widely understood even before Lucas took ownership of the idea, does at least warn us not to confuse a reduced form with a causal relationship.

What Hath Merkel Wrought?

In my fifth month of blogging in November 2011, I wrote a post which I called “The Economic Consequences of Mrs. Merkel.” The title, as I explained, was inspired by J. M. Keynes’s famous essay “The Economic Consequences of Mr. Churchill,” which eloquently warned that Britain was courting disaster by restoring the convertibility of sterling into gold at the prewar parity of $4.86 to the pound, the dollar then being the only major currency convertible into gold. The title of Keynes’s essay, in turn, had been inspired by Keynes’s celebrated book The Economic Consequences of the Peace about the disastrous Treaty of Versailles, which accurately foretold the futility of imposing punishing war reparations on Germany.

In his essay, Keynes warned that by restoring the prewar parity, Churchill would force Britain into an untenable deflation at a time when more than 10% of the British labor force was unemployed (i.e., looking for, but unable to find, a job at prevailing wages). Keynes argued that the deflation necessitated by restoration of the prewar parity would impose an intolerable burden of continued and increased unemployment on British workers.

But, as it turned out, Churchill’s decision was less disastrous than Keynes had feared. The resulting deflation was quite mild, wages in nominal terms were roughly stable, and real output and employment grew steadily, with unemployment gradually falling below 10% by 1928. The deflationary shock that Keynes had warned against proved milder than anticipated because the U.S. Federal Reserve, under the leadership of Benjamin Strong, President of the New York Fed, the de facto monetary authority of the US and the world, followed a policy that allowed a slight increase in the world price level in terms of dollars, thereby moderating the deflationary effect on Britain of restoring the prewar sterling/dollar exchange rate.

Thanks to Strong’s enlightened policy, the world economy continued to expand through 1928. I won’t discuss the sequence of events in 1928 and 1929 that led to the 1929 stock market crash, but those events had little, if anything, to do with Churchill’s 1925 decision. I’ve discussed the causes of the 1929 crash and the Great Depression in many other places including my 2011 post about Mrs. Merkel, so I will skip the 1929 story in this post.

The point that I want to make is that even though Keynes’s criticism of Churchill’s decision to restore the prewar dollar/sterling parity was well-taken, the dire consequences that Keynes foretold, although they did arrive a few years thereafter, were not actually caused by Churchill’s decision, but by decisions made in Paris and New York, over which Britain may have had some influence, but little, if any, control.

What I want to discuss in this post is how my warnings about potential disaster almost six and a half years ago have turned out. Here’s how I described the situation in November 2011:

Fast forward some four score years to today’s tragic re-enactment of the deflationary dynamics that nearly destroyed European civilization in the 1930s. But what a role reversal! In 1930 it was Germany that was desperately seeking to avoid defaulting on its obligations by engaging in round after round of futile austerity measures and deflationary wage cuts, causing the collapse of one major European financial institution after another in the annus horribilis of 1931, finally (at least a year too late) forcing Britain off the gold standard in September 1931. Eighty years ago it was France, accumulating huge quantities of gold, in Midas-like self-satisfaction despite the economic wreckage it was inflicting on the rest of Europe and ultimately itself, whose monetary policy was decisive for the international value of gold and the downward course of the international economy. Now, it is Germany, the economic powerhouse of Europe dominating the European Central Bank, which effectively controls the value of the euro. And just as deflation under the gold standard made it impossible for Germany (and its state and local governments) not to default on its obligations in 1931, the policy of the European Central Bank, self-righteously dictated by Germany, has made default by Greece and now Italy and at least three other members of the Eurozone inevitable. . . .

If the European central bank does not soon – and I mean really soon – grasp that there is no exit from the debt crisis without a reversal of monetary policy sufficient to enable nominal incomes in all the economies in the Eurozone to grow more rapidly than does their indebtedness, the downward spiral will overtake even the stronger European economies. (I pointed out three months ago that the European crisis is a NGDP crisis not a debt crisis.) As the weakest countries choose to ditch the euro and revert back to their own national currencies, the euro is likely to start to appreciate as it comes to resemble ever more closely the old deutschmark. At some point the deflationary pressures of a rising euro will cause even the Germans, like the French in 1935, to relent. But one shudders at the economic damage that will be inflicted until the Germans come to their senses. Only then will we be able to assess the full economic consequences of Mrs. Merkel.

Greece did default, but the European Community succeeded in imposing draconian austerity measures on Greece, while Italy, Spain, France, and Portugal, which had all been in some danger, managed to avoid default. That they did so is due, first, to the enormous cost that would have had to be borne by a country in the Eurozone to extricate itself from the Eurozone and reinstitute its own national currency, and, second, to the actions taken by Mario Draghi, who succeeded Jean-Claude Trichet as President of the European Central Bank in November 2011. If monetary secession from the eurozone were less fraught, surely Greece, and perhaps other countries, would have chosen that course rather than absorb the continuing pain of remaining in the eurozone.

But had it not been for a decisive change in policy by Draghi, Greece and perhaps other countries would have been compelled to follow that uncharted and potentially catastrophic path. After assuming leadership of the ECB, Draghi immediately reversed the perverse interest-rate hikes imposed by his predecessor and, even more crucially, announced in July 2012 that the ECB “is ready to do whatever it takes to preserve the Euro. And believe me, it will be enough.” Draghi’s reassurance that monetary easing would be sufficient to avoid default calmed markets, alleviating the pressure that had been driving up interest rates on the debt issued by those countries.

But although Draghi’s courageous actions to ease monetary policy in the face of German disapproval averted a complete collapse, Mrs. Merkel’s ferocious anti-inflation policy did irreparable damage, not only to Greece, but, by deepening the European downturn and delaying and suppressing the recovery, to the rest of the European community, inflaming the anti-EU, populist nationalism in much of Europe that helped fuel the campaign for Brexit in the UK, has inspired similar anti-EU movements elsewhere in Europe, and almost prevented Mrs. Merkel from forming a government after the election a few months ago.

Mrs. Merkel is perhaps the most impressive political leader of our time, and her willingness to follow a humanitarian policy toward refugees fleeing the horrors of war and persecution showed an extraordinary degree of political courage and personal decency that ought to serve as a model for other politicians to emulate. But that admirable legacy will be forever tarnished by the damage she inflicted on her own country and the rest of the EU by her misguided battle against the phantom threat of inflation.

What Do Stock Prices Tell Us about the Economy?

Stock prices (as measured by the S&P 500) rose by over 20% in 2017, an impressive amount, and what is perhaps most impressive is that this rise in prices came after eight previous years of steady increases.

Here are the annual year-on-year and cumulative changes in the S&P 500 since 2009.

Year          Annual     Cumulative*
2009          21.1%       21.1%
2010          12.0%       33.1%
2011          -0.0%       33.1%
2012          12.2%       45.3%
2013          25.9%       71.2%
2014          10.8%       82.0%
2015          -0.7%       81.3%
2016           9.1%       90.4%     (4.5%)**     (85.9%)***
2017          17.7%      108.1%     (22.3%)****
2018 (YTD)     2.0%      110.1%     (24.3%)****

* cumulative increase since the end of 2008
** increase from end of 2015 to November 8, 2016
*** cumulative increase from end of 2008 to November 8, 2016
**** cumulative increase since November 8, 2016

So, from the end of 2008 until the start of 2017, approximately coinciding with Obama’s two terms as President, the S&P 500 rose in every year except 2011 and 2015, when the index was essentially unchanged, and rose by more than 10% in five of the eight years (twice by more than 20%), with stock prices nearly doubling during the Obama Presidency.

But what does the doubling of stock prices under Obama really tell us about the well-being of the American economy, and, even more importantly, about the well-being of the American public during those years? Is there any correlation between the performance of the stock market and the well-being of actual people? Does the doubling of stock prices under Obama mean that most Americans were better off at the end of his Presidency than they were at the start of it?

My answer to these questions is a definite — though not very resounding — yes, because we know that the US economy at the end of 2008 was in the middle of the sharpest downturn since the Great Depression. Output was contracting, employment was falling, and the financial system was on the verge of collapse, with stock prices down almost 50% from where they had been at the end of August, and nearly 60% from the previous all-time high reached in 2007. In 2016, after seven years of slow but steady growth, employment and output had recovered and surpassed their previous peaks, though only by small amounts. But the recovery, although disappointingly slow, was real.

That improvement was reflected, albeit with a lag, in changes in median household and median personal income between 2008 and 2016.

Year      Annual change     Cumulative change (since 2008)
2009          -0.7%               -0.7%
2010          -2.6%               -3.3%
2011          -1.6%               -4.9%
2012          -0.1%               -5.0%
2013           3.5%               -1.5%
2014          -1.5%               -3.0%
2015           5.1%                2.0%
2016           3.1%                5.1%

But it’s also striking how weak the correlation was between rapidly rising stock prices and rising median incomes in the Obama years. Given a tepid real recovery from the Little Depression, what accounts for the associated roaring recovery in stock prices? Well, for one thing, much of the improvement in the stock market was simply recovering losses in stock valuations during the downturn. Stock prices having fallen further than incomes in the Little Depression, it’s not surprising that the recovery in stocks was steeper than the recovery in incomes. It took four years for the S&P 500 to reach its pre-Depression peak, so, normalized to their pre-Depression peaks, the correlation between stock prices and median incomes is not as weak as it seems when comparing year-on-year percentage changes.

But viewing the improvement in stock prices under Obama in historical context also makes it seem less remarkable than it does when the previous downturn is not taken into account. Stock prices simply returned (more or less) to the path that one might have expected them to follow by extrapolating their past performance. Nevertheless, even taking into account that, during the Little Depression, stock prices fell more sharply than real incomes, stocks have clearly outperformed the real economy during the recovery, real output and income having failed to return to the growth path they had been tracking before the 2008 downturn.

Why have stocks outperformed the real economy? The answer to that question is a straightforward application of the basic theory of asset valuation, according to which the value of real assets – machines, buildings, land — and financial assets — stocks and bonds — reflects the discounted expected future income streams associated with those assets. In particular, stock prices represent the discounted present value of the expected future cash flows (dividends or stock buy-backs) from firms to their shareholders. So, if the economy has “recovered” (more or less) from the 2008-09 downturn, the expected future cash flows from firms have presumably – and on average — surpassed the cash flows that had been expected before the downturn.

But the weakness in the recovery suggests that the increase in expected cash flows can’t fully account for the increase in stock prices. Why did stock prices rise by more than the likely increase in expected cash flows? The basic theory of asset valuation tells us that the remainder of the increase in stock prices can be attributed to the decline of real interest rates since the 2008 downturn to historically low levels.
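
A minimal sketch of that present-value arithmetic shows how powerfully lower real rates can raise valuations even when expected cash flows are unchanged; the flat cash-flow stream and the discount rates below are assumptions chosen purely for illustration:

```python
# A minimal sketch of the discounted-cash-flow logic described above. The flat
# 30-year stream of expected payouts and the discount rates are assumptions
# chosen purely for illustration.

def present_value(cash_flows, r):
    """Discounted present value of a stream of expected cash flows."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))

flows = [5.0] * 30                 # expected payouts per share, years 1-30

for r in (0.06, 0.04, 0.02):
    print(f"real rate {r:.0%}: PV = {present_value(flows, r):.2f}")

# PV rises from about 68.8 at a 6% real rate to about 112.0 at 2%, so a fall
# in real interest rates can raise stock prices with cash flows unchanged.
```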

Of course, to say that the increase in stock prices is attributable to the decline in real interest rates just raises a different question: what accounts for the decline in real interest rates? The answer, derived from Irving Fisher, is basically that if perceived opportunities for future investment and growth are diminished, the willingness of people to trade future for present income also tends to diminish. What the rate of interest represents in the Fisherian framework is the rate at which people are willing to trade future for present income – i.e., the premium (discount) that is placed on present (future) income.

The Fisherian view is totally at odds with the view that the real interest rate is – or can be — controlled by the monetary authority. According to the latter view, the reason that real interest rates since the 2008 downturn have been at historically low levels is that the Federal Reserve has forced interest rates down to those levels by flooding the economy with huge quantities of printed money. There is a certain sense in which that view has a small element of truth: had the Fed adopted a different set of policy goals concerning inflation and nominal GDP, real interest rates might have risen to more “normal” levels. But given the overall policy framework within which it was operating, the Fed had only minimal control over the real rate of interest.

The essential idea is that in the Fisherian view the real rate of interest is not a single price determined in a single market; it is a distillation of the entire intertemporal structure of price relationships simultaneously determined in the myriad of individual markets in which transactions for present and future delivery are continuously being agreed upon. To imagine that the Fed, or any monetary authority, could control or even modestly influence this almost incomprehensibly complicated structure of price relationships according to its wishes is simply delusional.

If the decline in real interest rates after the 2008 downturn reflected generally reduced optimism about future economic growth, then the increase in stock prices actually reflected declining optimism by most people about their future well-being compared to their pre-downturn expectations. That loss of optimism might have been, at least in part, self-fulfilling insofar as it discouraged potentially worthwhile – i.e., profitable — investments that would have been undertaken had expectations been more optimistic.

Nevertheless, the near doubling of stock prices during the Obama administration did coincide with a not insignificant improvement in the well-being of most Americans. Most Americans were substantially better off at the end of 2016, after about seven years of slow but steady economic growth, than they were at the end of 2008 when total output and employment were contracting at the fastest rate since the Great Depression. But to use the increase in stock prices as a quantitative measure of the improvement in their well-being would be misleading.

I would also mention as an aside that a favorite faux-populist talking point of Obama and Fed critics used to be that rising stock prices during the Obama years revealed the bias of the elitist Fed Governors appointed by Obama in favor of the wealthy owners of corporate stock, and their callous disregard of the small savers who leave their retirement funds in bank savings accounts earning minimal interest and of workers whose wage increases barely kept up with inflation. But those critics – and I am thinking especially of the Wall Street Journal editorial page – who excoriated the Obama administration and the Fed for trying to raise stock prices by keeping interest rates at abnormally low levels now unblushingly celebrate record-high stock prices as proof that tax cuts mostly benefiting corporations and their stockholders signal the start of a new golden age of accelerating growth.

So the next question to consider is what can we infer about the well-being of Americans and the American economy from the increase in stock prices since November 8, 2016? For purposes of this mental exercise, let me stipulate that the rise in stock prices since the moment when it became clear who had been elected President by the voters on November 8, 2016 was attributable to the policies that the new administration was expected to adopt.

Because interest rates have risen along with stock prices since November 8, 2016, increased stock prices must reflect investors’ growing optimism about the future cash flows to be distributed by corporations to shareholders. So, our question can be restated as follows: which policies — actual or expected — of the new administration could account for the growing optimism of investors since the election? Here are five policy categories to consider: (1) regulation, (2) taxes, (3) international trade, (4) foreign affairs, (5) macroeconomic and monetary policies.

The negative reaction of stock prices to the announcement last week that tariffs will be imposed on steel and aluminum imports suggests that hopes for protectionist trade policies were not the main cause of rising investor optimism since November 2016. And presumably investor hopes for rising corporate cash flows to shareholders were not buoyed up by increasing tensions on the Korean peninsula and various belligerent statements by Administration officials about possible military responses to North Korean provocations.

Macroeconomic and monetary policies being primarily the responsibility of the Federal Reserve, the most important macroeconomic decision made by the new Administration to date was appointing Jay Powell to succeed Janet Yellen as Fed Chair. But this appointment was seen as a decision to keep Fed monetary policy more or less unchanged from what it was under Yellen, so one could hardly ascribe increased investor optimism to a decision not to change the macroeconomic and monetary policies that had been in place for at least the previous four years.

That leaves us with anticipated or actual changes in regulatory and tax policies as reasons for increased optimism about future cash flows from corporations to their shareholders. The two relevant questions to ask about anticipated or actual changes in regulatory and tax policies are: (1) could such changes have raised investor optimism, thereby raising stock prices, and (2), if so, would rising stock prices reflect enhanced well-being on the part of the American economy and the American people?

Briefly, the main idea for regulatory reform that the Administration wants to pursue is to require that whenever an agency adopts a new regulation, it simultaneously eliminate two old ones. Supposedly such a requirement – sometimes called a regulatory budget – is meant to limit the total amount of regulation that the government can impose on the economy, the theory being that new regulations would not be adopted unless they were likely to be really effective.

But agencies are already required to show that new regulations pass a cost-benefit test before they are imposed. So it’s not clear that the economy would be better off if new regulations, which can now be adopted only if they are expected to generate benefits exceeding the costs associated with their adoption, cannot be adopted unless two other regulations are eliminated. Presumably, underlying the new regulatory approach is a theory of bureaucratic behavior positing that the benefits of new regulations are systematically overestimated and their costs systematically underestimated by bureaucrats.

I’m not going to argue the merits of the underlying theory, but obviously it is possible that the new regulatory approach would result in increased profits for businesses that will have fewer regulatory burdens imposed upon them, thereby increasing the value of ownership shares in those firms. So, it’s possible that the new regulatory approach adopted by the Administration is causing stock prices to rise, presumably by more than they would have risen under the old simple cost-benefit regulatory approach that was followed by the Obama Administration.

But even if the new regulatory approach has caused stock prices to rise, it’s not clear that the increased stock valuations represent a net increase in the well-being of the American economy and the American people. If regulations that are costly to the economy in general are eliminated, the benefits of fewer regulations would accrue not just to the businesses whose profits rise as a result; eliminating inefficient regulations would also benefit the rest of the economy by freeing up resources to produce goods and services whose value to consumers would exceed the benefits foregone when the regulations were eliminated. But it’s also possible that the eliminated regulations were providing benefits greater than the costs of implementing and enforcing them.

If eliminating regulations leads to increased pollution or sickness or consumer fraud, and the value of those foregone benefits exceeds the costs of those regulations, it will not be corporations and their shareholders that suffer; it will be the general public that bears the burden of their elimination. While corporations increase the cash flows paid to shareholders, members of the public will suffer more-than-offsetting reductions in well-being from increased pollution, illness and injury, and added fraud and other consumer harms.

Since 1970, when the federal government took serious measures to limit air and water pollution, air and water quality have improved greatly in most of the US. Those improvements, for the most part, have probably not been reflected in stock prices, because environmental improvements, mostly affecting common-property resources, can’t easily be capitalized, though some of those improvements have likely been reflected in increasing land values in cities and neighborhoods where air and water quality have improved. Foregoing pollution-reducing regulations might actually have led to increased stock prices for many corporations burdened by those regulations, but the US as a whole, and its inhabitants, would not have been better off without those regulations than they are with them.

So, rising stock prices are not necessarily a good indicator of whether the new regulatory approach of the Administration is benefiting or harming the American economy and the American public. Market valuations convey a lot of important information, but there is also a lot of important information that is not conveyed in stock prices.

As for taxes, it is straightforward that reducing corporate-tax liability increases the funds available to be paid directly to shareholders as dividends and share buy-backs, or indirectly through investments expected to increase cash flows to shareholders in the more distant future. Does an increase in stock prices caused by a reduction in corporate-tax liability imply any enhancement in the well-being of the American economy and the American people?

The answer, as a first approximation, is no. A reduction in corporate tax liability implies a reduction in the tax liability of shareholders, and that reduction is immediately capitalized into the value of shares. Increased stock prices simply reflect the expected reduction in shareholder tax liability.
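To see what immediate capitalization means, here is a stylized perpetuity valuation with made-up numbers (the cash flow, tax rates, and discount rate below are purely illustrative assumptions of mine):

```python
# Stylized example (all numbers invented): value a perpetual pre-tax cash
# flow after corporate tax, and watch a rate cut get capitalized into price.
pre_tax_cash_flow = 100.0     # dollars per year, in perpetuity
discount_rate = 0.05

def firm_value(tax_rate):
    # perpetuity value of the after-tax cash flow
    return pre_tax_cash_flow * (1 - tax_rate) / discount_rate

before = firm_value(0.35)     # 1300.0
after = firm_value(0.21)      # 1580.0
print(after - before)         # 280.0 = 100 * 0.14 / 0.05
```

The entire 280-dollar rise in value is just the present value of the shareholders’ tax savings; nothing in the example implies any improvement in the firm’s real performance.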

Of course, reducing the tax burden on shareholders may improve economic performance, causing an increase in corporate cash flows to shareholders exceeding the reduction in shareholder tax liabilities. But it is unlikely that the difference between the increase in cash flows to shareholders and the reduction in shareholder tax liabilities would be more than a few percent of the total reduction in corporate tax liability, so that any increase in economic performance resulting from a reduction in corporate tax liability would account for only a small fraction of the increase in stock prices.

The good thing about the corporate-income tax is that it is so easy to collect and so hard to tell who really bears its burden: shareholders, workers or consumers. That’s why governments like taxing corporations. But the really bad thing about the corporate-income tax is precisely that same opacity: no one can tell who really bears its burden.

Because it is so hard to tell who bears the burden of the tax, people just think that “corporations” pay the tax, but “corporations” aren’t people, and they don’t really pay taxes; they are just the conduit for a lot of unidentified people to pay unknown amounts of tax. As Adam Winkler has just explained in this article and in an important new book, it is a travesty that the Supreme Court was hoodwinked in the latter part of the nineteenth century into accepting the notion that corporations are Constitutional persons with essentially the same rights as actual persons – indeed, with far greater rights than human beings belonging to disfavored racial or ethnic categories.

As I wrote years ago in one of my early posts on this blog, there are some very good arguments for abolishing the corporate income tax altogether, as Hyman Minsky argued. Forcing corporations to distribute their profits to shareholders would diminish the incentives for corporate empire building, thereby making venture capital more available to start-ups and small businesses. Such a reform might turn out to be an important democratizing and decentralizing change in the way that modern capitalism operates. But even if that were so, it would not mean that the effects of a reduction in the corporate tax rate could be properly measured by looking at the resulting change in corporate stock prices.

Before closing this excessively long post, I will just remark that although I have been using the basic theory of asset pricing that underlies the efficient market hypothesis (EMH), adopting that theory of asset pricing does not imply that I accept the EMH. What separates me from adherents of the EMH is their assumption that there is a single unique equilibrium toward which the economy is tending at any moment in time, and that the expectations of market participants are unbiased and efficient estimates of the equilibrium price vector toward which the price system is moving. I reject all of those assumptions about the existence and uniqueness of an equilibrium price vector. If there is no equilibrium price vector toward which the economy is tending, the idea that expectations are governed by some objective equilibrium which is already there to be discovered is erroneous; expectations create their own reality, and equilibrium is itself determined by expectations. When the existence of equilibrium depends on expectations, it becomes impossible to assign any meaning to the term “efficient market.”

Milton Friedman’s Rabble-Rousing Case for Abolishing the Fed

I recently came across this excerpt from a longer interview of Milton Friedman conducted by Brian Lamb on C-SPAN in 1994. In the excerpt Lamb asks Friedman what he thinks of the Fed, and Friedman, barely able to contain his ideological fervor, quickly rattles off his version of the history of the Fed, blaming the Fed, at least by implication, for all the bad monetary and macroeconomic events that happened between 1914, when the Fed came into existence, and the 1970s.

Here’s a rough summary of Friedman’s tirade:

I have long been in favor of abolishing [the Fed]. There is no institution in the United States that has such a high public standing and such a poor record of performance. . . . The Federal Reserve began operations in 1914 and presided over a doubling of prices during World War I. It produced a major collapse in 1921. It had a good period from about 1922 to 1928. It took actions in 1928 and 1929 that led to a major recession in 1929 and 1930, and it converted that recession by its actions into the Great Depression. The major villain in the Great Depression in my opinion was unquestionably the Federal Reserve System. Since that time, it presided over a doubling of prices in World War II. It financed the inflation of the 1970s. On the whole it has a very poor record. It’s done far more harm than good.

Let’s go through Friedman’s complaints one at a time.

World War I Inflation

Friedman blames World War I inflation on the Fed. Friedman, as I have shown in many previous posts, had a very shaky understanding of how the gold standard worked. His remark about the Fed’s “presiding over a doubling of prices” during World War I is likely yet another example of that incomprehension, though his use of the weasel words “presided over,” rather than the straightforward “caused,” does suggest that Friedman was merely trying to insinuate that the Fed was blameworthy when he actually understood that the Fed had almost no control over inflation in World War I. The US remained formally on the gold standard until April 6, 1917, when it declared war on Germany, entered World War I, and formally suspended the convertibility of the dollar into gold.

As long as the US remained on a gold standard, the value of the dollar was determined by the value of gold. The US was importing lots of gold during the first two and a half years of World War I as the belligerents used their gold reserves and demonetized their gold coins to finance imports of war material from the US. The massive demonetization of gold caused gold to depreciate on world markets. Another neutral country, Sweden, actually left the gold standard during World War I to avoid the inevitable inflation associated with the wartime depreciation of gold. So it was either ignorant or disingenuous for Friedman to attribute the World War I inflation to the actions of the Federal Reserve. No country could have remained on the gold standard during World War I without accepting inflation, and the Federal Reserve had no legal authority to abrogate or suspend the legal convertibility of the dollar into a fixed weight of gold.

The Post-War Collapse of 1921

Friedman correctly blames the 1921 collapse on the Fed. However, after a rapid wartime and postwar inflation, the US was trying to recreate a gold standard while holding 40% of the world’s gold reserves. The Fed therefore took steps to stabilize the value of gold, which meant raising interest rates, thereby inducing a further inflow of gold into the US to stop the real value of gold from falling in international markets. The problem was that the Fed went overboard, causing an extremely steep, and probably unnecessary, deflation.

The Great Depression

Friedman is right that the Fed helped cause the Great Depression by its actions in 1928 and 1929, raising interest rates to try to quell rapidly rising stock prices. But the concerns about rising stock-market prices were probably misplaced, and the Fed’s raising of interest rates caused an inflow of gold into the US just when a gold outflow was needed to accommodate the rising demand for gold by the Bank of France and other central banks rejoining the gold standard and accumulating gold reserves. It was the sudden tightening of the world gold market – the US, France, and other countries rejoining the gold standard all simultaneously trying to increase their gold holdings – that caused the value of gold to rise (and nominal prices to fall) in 1929, starting the Great Depression. Friedman totally ignored the international context in which the Fed was operating, failing to see that the US price level under the newly reestablished gold standard, being determined by the international value of gold, was beyond the Fed’s control.

World War II Inflation

As with World War I, Friedman blamed the Fed for “presiding over” a doubling of prices in World War II. But unlike World War I, when rising US prices reflected a falling real value of gold caused by events outside the US and beyond the control of the Fed, in World War II rising US prices reflected the falling value of an inconvertible US dollar caused by Fed “money printing” at the behest of the President and the Treasury. But why did Friedman consider Fed money printing in World War II to have been a blameworthy act on the part of the Fed? The US was then engaged in a total war against the Axis powers. Under those circumstances, was the primary duty of the Fed to keep prices stable, or to use its control over the “printing press” to ensure that the US government had sufficient funds to win the war against Nazi totalitarianism and allied fascist forces, thereby preserving American liberties and values even more fundamental than keeping inflation low and enabling creditors to extract what was owed to them by their debtors in dollars of undiminished real purchasing power?

Now it’s true that many of Friedman’s libertarian allies were appalled by US participation in World War II, but Friedman, to his credit, did not share their disapproval. Given his support for the war, however, Friedman should at least have acknowledged the obvious role of inflationary finance in emergency war financing, a role which, as Earl Thompson and I and others have argued, rationalizes the historic legal monopoly on money printing maintained by almost all sovereign states. To condemn the Fed for inflationary policies during World War II without recognizing the critical role of the “printing press” in war finance was a remarkably uninformed and biased judgment on Friedman’s part.

1970s Inflation

The Fed certainly played a major role in the inflation of the 1970s, which, as early as 1966, was already starting to creep up from the 1-2% rates that had prevailed from 1953 to 1965. The rise in inflation was again triggered by war-related expenditures, owing to the growing combat role of the US in Vietnam starting in 1965. The Fed’s role in rising inflation in the late 1960s and early 1970s was hardly its finest hour, but again, it is unrealistic to expect a public institution like the Fed to withhold the financing necessary to support a military action undertaken by the national government. Certainly, the role of Arthur Burns, appointed by Nixon in 1970 to become Fed Chairman, in encouraging Nixon to impose wage-and-price controls as an anti-inflationary measure was one of the most disreputable chapters in the Fed’s history, and the cluelessness of Carter’s first Fed Chairman, G. William Miller, appointed to succeed Burns, is almost legendary. But given the huge oil-price increases of 1973-74 and 1978-79, a policy of accommodating those supply-side shocks by allowing a temporary increase in inflation was probably optimal. So, given the difficult circumstances under which the Fed was operating, the increased inflation of the 1970s was not entirely undesirable.

But although Friedman was often sensitive to the subtleties and nuances of policy making when rendering scholarly historical and empirical judgments, he rarely allowed subtleties and nuances to encroach on his denunciations when he was operating in full rabble-rousing mode.

Pedantry and Mastery in Following Rules

From George Polya’s classic How to Solve It (p. 148).

To apply a rule to the letter, rigidly, unquestioningly, in cases where it fits and cases where it does not fit, is pedantry. Some pedants are poor fools; they never did understand the rule which they apply so conscientiously and so indiscriminately. Some pedants are quite successful; they understood their rule, at least in the beginning (before they became pedants), and chose a good one that fits in many cases and fails only occasionally.

To apply a rule with natural ease, with judgment, noticing the cases where it fits, and without ever letting the words of the rule obscure the purpose of the action or the opportunities of the situation, is mastery.

Polya, of course, was distinguishing between pedantry and mastery in applying rules for problem solving, but his distinction can be applied more generally: a distinction between following rules using judgment (aka discretion) and following rules mechanically without exercising judgment (i.e., without using discretion). Following rules by rote need not be dangerous when circumstances are more or less those envisioned when the rules were originally articulated, but when unforeseen circumstances arise, making the rule unsuitable to the new circumstances, following rules mindlessly can lead to really bad outcomes.

In the real world, the rules that we live by have to be revised and reinterpreted constantly in the light of experience and of new circumstances and changing values. Rules are supposed to conform to deeper principles, but the specific rules that we try to articulate to guide our actions are in need of periodic revision and adjustment to changing circumstances.

In deciding cases, judges change the legal rules that they apply by recognizing subtle — and relevant — distinctions that need to be taken into account in rendering decisions. They do not adjust rules willfully and arbitrarily. Instead, relying on deeper principles of justice and humanity, they adjust or bend the rules to temper the injustices that would result from a mechanical and unthinking application of the rules. By exercising judgment — in other words, by doing what judges are supposed to do — they uphold, rather than subvert, the rule of law in the process of modifying the existing rules. The modern fetish for depriving judges of the discretion to exercise judgment in rendering decisions is antithetical to the concept of the rule of law.

A similar fetish for rules-based monetary policy, i.e., a monetary system requiring the monetary authority to follow some numerical rule mechanically, is an equally outlandish misapplication of the idea that law is nothing more than a system of rules and that judges should do no more than select the relevant rule to be applied and render a decision based on that rule, without considering whether the decision is consistent with the deeper underlying principles of justice on which the legal system as a whole is based.

Because judges exercise coercive power over the lives and property of individuals, the rule of law requires their decisions to be justified in terms of the explicit rules and implicit and explicit principles of the legal system judges apply. And litigants have a right to appeal judgments rendered if they can argue that the judge misapplied the relevant legal rules. Having no coercive power over the lives or property of individuals, the monetary authority need not be bound by the kind of legal constraints to which judges are subject in rendering decisions that directly affect the lives and property of individuals.

The apotheosis of the fetish for blindly following rules in monetary policy was the ideal expressed by Henry Simons in his famous essay “Rules versus Authorities in Monetary Policy” in which he pleaded for a monetary rule that “would work mechanically, with the chips falling where they may. We need to design and establish a system good enough so that, hereafter, we may hold to it unrationally — on faith — as a religion, if you please.”

However, Simons, recovering from this momentary lapse into irrationality, quickly conceded that his plea for a monetary system good enough to be held on faith was impractical, abandoning it in favor of the more modest goal of stabilizing the price level. Simons’s student Milton Friedman, however, surpassed his teacher in pedantry, inventing what came to be known as his k-percent rule, under which the Federal Reserve would be required to make the total quantity of money in the economy increase continuously at an annual rate of growth equal to k percent. Friedman actually believed that his rule could be implemented by a computer, so he confidently — and foolishly — recommended abolishing the Fed.
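Friedman’s rule really is simple enough to fit in a few lines of code, which is worth seeing to appreciate just how mechanical it is. A minimal sketch (the value of k and the starting money stock are arbitrary illustrations of mine):

```python
# The k-percent rule as an algorithm; the point is that no judgment or
# discretion is involved. k = 4% and the initial money stock are arbitrary.
k = 0.04
money = 1_000.0
for year in range(1, 6):
    money *= 1 + k                 # expand the money stock mechanically
    print(year, round(money, 1))   # 1040.0, 1081.6, 1124.9, 1169.9, 1216.7
```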

Eventually, after erroneously forecasting the return of double-digit inflation for nearly two decades, Friedman, a fervent ideologue but also a superb empirical economist, reluctantly allowed his ideological predispositions to give way in the face of contradictory empirical evidence and abandoned his k-percent rule. That was a good, if long overdue, call on Friedman’s part, and it should serve as a lesson and a warning to advocates of imposing overly rigid rules on the monetary authorities.

Noah Smith on Bitcoins: A Failure with a Golden Future

Noah Smith and I agree that, as I argued in my previous post, Bitcoins have no chance of becoming a successful money, much less replacing or displacing the dollar as the most important and widely used money in the world. In a post on Bloomberg yesterday, Noah explains why Bitcoins are nearly useless as money, reiterating a number of the points I made and adding some others of his own. However, I think that Bitcoins must sooner or later become worthless, while Noah thinks that Bitcoins, like gold, can be a worthwhile investment for those who think that it is fiat money that is going to become worthless. Here’s how Noah puts it.

So cryptocurrencies won’t be actual currencies, except for drug dealers and other people who can’t use normal forms of payment. But will they be good financial investments? Some won’t — some will be scams, and many will simply fall into disuse and be forgotten. But some may remain good investments, and even go up in price over many decades.

A similar phenomenon has already happened: gold. Legendary investor Warren Buffett once ridiculed gold for being an unproductive asset, but the price of the yellow metal has climbed over time:

Why has gold increased in price? One reason is that it’s not quite useless — people use gold for jewelry and some industrial applications, so the metal slowly goes out of circulation, increasing its scarcity.

And another reason is that central banks now own more than 17% of all the gold in the world. In the 1980s and 1990s, when the value of gold was steadily dropping to as little as $250 an ounce, central banks were selling off their unproductive gold stocks, until they realized that, in selling off their gold stocks, they were driving down the value of all the gold sitting in their vaults. Once they figured out what they were doing, they agreed among themselves that they would start buying gold instead of selling it. And in the early years of this century, gold prices started to rebound.

But another reason is that people simply believe in gold. In the end, the price of an asset is what people believe it’s worth.

Yes, but it sure does help when there are large central banks out there buying unwanted gold, and piling it up in vaults where no one else can do anything with it.

Many people believe that fiat currencies will eventually collapse, and that gold will reemerge as the global currency.

And it’s the large central banks issuing the principal fiat currencies whose immense holdings of gold reserves keep the price of gold from collapsing.

That narrative has survived over many decades, and the rise of Bitcoin as an alternative hasn’t killed it yet. Maybe there’s a deeply embedded collective memory of the Middle Ages, when governments around the world were so unstable that gold and other precious metals were widely used to make payments.

In the Middle Ages, the idea of, and the technology for creating, fiat money had not yet been invented, though coin debasement was widely practiced. It took centuries before a workable system for controlling fiat money was developed.

Gold bugs, as advocates of gold as an investment are commonly known, may simply be hedging against the perceived possibility that the world will enter a new medieval period.

How ill-mannered of them not to thank central banks for preventing the value of gold from collapsing.

Similarly, Bitcoin or other cryptocurrencies may never go to zero, even if no one ends up using them for anything. They represent a belief in the theory that fiat money is doomed, and a hedge against the possibility that fiat-based payments systems will one day collapse. When looking for a cryptocurrency to invest in, it might be useful to think not about which is the best payments system, but which represents the most enduring expression of skepticism about fiat money itself.

The problem with cryptocurrencies is that there is no reason to think that central banks will start amassing huge stockpiles of cryptocurrencies, thereby ensuring that the demand for cryptocurrencies will always be sufficient to keep their value at or above whatever level the central banks are comfortable with.

It just seems odd to me that some people want to invest in Bitcoins, which provide no present or future real services and almost no present or future monetary services, in the belief that it is fiat money – which clearly does provide present and future monetary services, and provides the non-trivial additional benefit of enabling one to discharge tax liabilities to the government – that is going to become worthless sometime in the future.

If your bet that Bitcoins are going to become valuable depends on the forecast that dollars will become worthless, you probably need to rethink your investment strategy.

Is “a Stable Cryptocurrency” an Oxymoron?

By way of a tweet by the indefatigable and insightful Frances Coppola, I just came upon this smackdown by Preston Byrne of the recent cryptocurrency startup called the Basecoin. I actually agree with much of Byrne’s critique, and I am on record (see several earlier blogposts such as this, this, and this) in suggesting that Bitcoins are a bubble. However, despite my deep skepticism about Bitcoins and other cryptocurrencies, I have also pointed out that, at least in theory, it’s possible to imagine a scenario in which a cryptocurrency would be viable. And because Byrne makes such a powerful (but I think overstated) case against Basecoin, I want to examine his argument a bit more carefully. But before I discuss Byrne’s blogpost, some theoretical background might be useful.

One of my first posts after launching this blog was called “The Paradox of Fiat Money” in which I posed this question: how do fiat moneys retain a positive value, when the future value of any fiat money will surely fall to zero? This question is based on the backward-induction argument that is widely used in game theory and dynamic programming. If you can figure out the end state of a process, you can reason backwards and infer the values that are optimally consistent with that end state.

If the value of money must go to zero in some future time period, and the demand for money now is derived entirely from the expectation that it will retain a positive value tomorrow, so that other people will accept from you the money that you have accepted in exchange today, then the value of the fiat money should go to zero immediately, because everyone, knowing that its future value must fall to zero, will refuse to accept it between now and that future time when its value must be zero. There are ways of sidestepping the logic of backward induction, but I suggested, following a diverse group of orthodox neoclassical economists, including P. H. Wicksteed, Abba Lerner, and Earl Thompson, that the value of fiat money is derived, at least in part, from the current acceptability of fiat money in discharging tax liabilities, thereby creating an ongoing current demand for fiat money.
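To make the backward-induction logic concrete, here is a minimal numerical sketch of my own (the horizon, discount factor, and “tax flow” are illustrative assumptions, not anything drawn from Wicksteed, Lerner, or Thompson): value the money by discounting its expected resale value back from a terminal date, then add a constant flow benefit from tax acceptability and note that the value no longer unravels.

```python
# Backward induction on the value of a fiat money (illustrative sketch).
# V[t] = flow benefit received at t (e.g., acceptability for paying taxes)
#        + discounted resale value at t+1. By assumption V[T] = 0.

def money_values(T, discount=0.95, flow=0.0):
    V = [0.0] * (T + 1)                 # worthless at the terminal date T
    for t in range(T - 1, -1, -1):
        V[t] = flow + discount * V[t + 1]
    return V

print(money_values(50)[0])                       # 0.0: pure bubble value unravels
print(round(money_values(50, flow=1.0)[0], 2))   # ~18.46: tax flow supports value
```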

After I raised the problem of explaining the positive value of fiat money, I began thinking about the bitcoin phenomenon, which seems to present a similar paradox, and about a different approach to the problem of explaining the positive value of fiat money, and of bitcoins. The alternative approach focuses on the network externality associated with the demand for money: the willingness of people to hold and accept a medium of exchange increases with the number of other people willing to accept and hold that medium of exchange. Your demand for money increases the usefulness that money has for me. But the existence of that network externality creates a certain lock-in effect, because if you and I are potential transactors with each other, your demand for a particular money makes it more difficult for me to switch away from the medium of exchange that we are both using to another one that you are not using. So while backward induction encourages us to switch away from the fiat money that we are both using, the network externality encourages us to keep using it. The net effect is unclear, but it suggests that an equilibrium with a positive value for a fiat money may be unstable, creating a tipping point beyond which the demand for a fiat money, and its value, may start to fall very rapidly as people all start rushing for the exit at the same time.
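The tipping-point idea can be sketched with a toy adoption model (entirely my own illustration; the parameters are arbitrary): adoption grows when the network benefit of the money outweighs the backward-induction pull toward abandoning it, and shrinks otherwise.

```python
# Toy dynamics of money adoption with a network externality (illustration).
# Adoption share x grows when the network benefit b*x exceeds the incentive
# c to abandon the money; x* = c/b is the unstable tipping point.

def simulate(x0, b=1.0, c=0.4, k=0.5, periods=200):
    x = x0
    for _ in range(periods):
        x += k * x * (1 - x) * (b * x - c)   # logistic-style adjustment
        x = min(max(x, 0.0), 1.0)
    return x

print(round(simulate(0.45), 3))   # starts above the tipping point (0.4): heads toward 1
print(round(simulate(0.35), 3))   # starts below it: demand unravels toward 0
```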

So the takeaway for cryptocurrencies is that even though a cryptocurrency, offering nothing to the holder of the currency but its future resale value, is inherently worthless and therefore inherently vulnerable to a relentless and irreversible loss of value once that loss of value is generally anticipated, if the cryptocurrency can somehow attract sufficient initial acceptance as a medium of exchange, the inevitable loss of value can at least be delayed, allowing the cryptocurrency to gain acceptance, through a growing core of transactors offering and accepting it as payment. For this to happen, the cryptocurrency must provide some real advantage to its core transactors not otherwise available to them when transacting with other currencies.

The difficulty of attracting transactors who will use the cryptocurrency is greatly compounded if the cryptocurrency rapidly appreciates in value. It may seem paradoxical that a rapid increase in the value of an asset – or more precisely, the expectation of a rapid increase in the value of an asset – detracts from its suitability as a medium of exchange, but an expectation of rapid appreciation tends to drive any asset already being used as a currency out of circulation. That tendency is a long- and widely recognized phenomenon, which even has both a name and a pithy epigram attached to it: “Gresham’s Law,” and “bad money drives out the good.”

The phenomenon has been observed for centuries, typically occurring when two moneys with equal face value circulate concurrently, but with one money having more valuable material content than the other. For example, if a coinage consists of both full-bodied and clipped coins with equal face value, people hoard the more valuable full-bodied coins, offering only the clipped coins in exchange. Similarly, if some denominations of the same currency are gold coins and others are silver coins, so that the relative values of the coins are legally fixed, a substantial shift in the relative market values of silver and gold causes the relatively undervalued (good) coins to be hoarded, disappearing from circulation, leaving only the relatively overvalued (bad) coins in circulation. I note in passing that a fixed exchange rate between the two currencies is not, as has often been suggested, necessary for Gresham’s Law to operate when the rate of appreciation of one of the currencies is sufficiently fast.

So if I have a choice of exchanging dollars with a stable or even falling value to obtain the goods and services that I desire, why would I instead use an appreciating asset to buy those goods and services? Insofar as people are buying bitcoins now in expectation of future appreciation, they are not going to be turning around to buy stuff with bitcoins when they could just as easily pay with dollars. The bitcoin bubble is therefore necessarily self-destructive. Demand is being fueled by the expectation of further appreciation, but the only service that a bitcoin offers is acceptability in exchange when making transactions — one transaction at any rate: being sold for dollars — while the expectation of appreciation is precisely what discourages people from using bitcoins to buy anything. Why then are bitcoins appreciating? That is the antinomy that renders the widespread acceptance of bitcoins as a medium of exchange inconceivable.
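The choice can be put in expected-return terms with a trivial sketch (the appreciation figures are invented for illustration):

```python
# Gresham's Law in expected-return form (appreciation figures invented):
# spend the money you expect to appreciate least, hoard the other.
expected_appreciation = {"dollar": 0.00, "bitcoin": 0.20}   # per year
spend = min(expected_appreciation, key=expected_appreciation.get)
hoard = max(expected_appreciation, key=expected_appreciation.get)
print(f"spend {spend}, hoard {hoard}")   # spend dollar, hoard bitcoin
```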

Promoters of bitcoins extol the blockchain technology that makes trading with bitcoins anonymous and secure. My understanding of the blockchain technology is completely superficial, but there are recurring reports of hacking into bitcoin accounts and fraudulent transactions, creating doubts about the purported security and anonymity of bitcoins. Moreover, the decentralized character of bitcoin transactions slows down, and increases the cost of, executing transactions with Bitcoin.

But let us stipulate for discussion purposes that Bitcoins do provide enhanced security and anonymity in performing transactions that more than compensate for the added costs of transacting with Bitcoins or other blockchain-based currencies, at least for some transactions. We all know which kinds of transactions require anonymity, and they are only a small subset of all the transactions carried out. So the number of transactions for which Bitcoins or blockchain-based cryptocurrencies might be preferred by transactors can’t be a very large fraction of the total number of transactions mediated by conventional currencies. But one could at least make a plausible argument that a niche market for a medium of exchange designed for secure anonymous transactions might be large enough to make a completely secure and anonymous medium of exchange viable. But we know that the Bitcoin will never be that alternative medium of exchange.

Understanding the fatal internal contradiction inherent in the Bitcoin, the creators of a cryptocurrency called Basecoin claim to have designed a cryptocurrency that will, or at any rate is supposed to, maintain a stable value even while profits accrue to investors from the anticipated increase in the demand for Basecoins. Other cryptocurrencies like Tether and Dai also purport to provide a stable value in terms of dollars, though the mechanism by which this is accomplished has not been made transparent, as the promoters of Basecoin promise to do. But here’s the problem: for a new currency, whose value its promoters promise to stabilize, to generate profits to its backers from an increasing demand for that currency, the new currency units issued as demand increases must be created at a cost well below the value at which the currency is to be stabilized.

Because new Bitcoins are so costly to create, the quantity of Bitcoins can’t be increased sufficiently to prevent Bitcoins from appreciating as the demand for Bitcoins increases. The very increase in demand for Bitcoins is what renders them unsuitable to serve as a medium of exchange. So if the value of Basecoins substantially exceeds the cost of producing Basecoins, what prevents the value of Basecoins from falling to the cost of creating new Basecoins, or at least what keeps the market from anticipating that the value of Basecoins will fall to the cost of producing new Basecoins?

To address this problem, the designers of the Basecoin have created a computer protocol that is supposed to increase or decrease the quantity of Basecoins according as the value of Basecoins either exceeds, or falls short of, its target exchange value of $1 per Basecoin. As an aside, let me just observe that even if we stipulate that the protocol would operate to stabilize the value of Basecoins at $1, there is still a problem in assuring traders that the protocol will be followed in practice. So it would seem necessary to make the protocol code publicly accessible, so that potential investors backing Basecoin and holders of Basecoin could ascertain that the protocol would indeed operate as represented by the Basecoin designers. So what might be needed is a WikiBasecoin.
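The protocol itself is not public in detail, but the general shape of such a supply rule is easy to sketch. Everything below is my guess at the mechanism, with a crude quantity-theory pricing assumption, not the actual Basecoin code:

```python
# A guess at a Basecoin-style supply rule (not the actual protocol): expand
# supply when the price is above the $1 target, contract it when below.
# Price is crudely modeled as nominal demand divided by outstanding supply.

TARGET = 1.00

def adjust_supply(supply, price, sensitivity=0.5):
    return supply * (1 + sensitivity * (price - TARGET))

demand = 1_500_000.0     # hypothetical dollar demand for Basecoins
supply = 1_000_000.0     # Basecoins outstanding

for _ in range(10):
    price = demand / supply
    supply = adjust_supply(supply, price)

print(round(demand / supply, 3))   # ~1.0: the peg holds while demand holds
```

The rule converges in this toy world precisely because demand is held fixed; it has nothing to say when demand collapses, which is the scenario that matters.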

But what I am interested in exploring here is whether the Basecoin protocol, or some other similar protocol, could actually work as asserted by the Basecoin White Paper. In an interesting blog post, Preston Byrne has argued that such a protocol cannot possibly work.

Basecoin claims to solve the problem of wildly fluctuating cryptocurrency prices through the issuance of a cryptocurrency for which “tokens can be robustly pegged to arbitrary assets or baskets of goods while remaining completely decentralized.” This is achieved, the paper states in its abstract, by the fact that “1 Basecoin can be pegged to always trade for 1 USD. In the future, Basecoin could potentially even eclipse the dollar and be updated to a peg to the CPI or basket of goods. . . .”

Basecoin claims that it can “algorithmically adjust…the supply of Basecoin tokens in response to changes in, for example, the Basecoin-USD exchange rate… implementing a monetary policy similar to that executed by central banks around the world”.

Two points.

First, this is not how central banks manage the money supply. . . .

But of course, Basecoin isn’t actually creating a monetary supply, which central banks will into existence and then use to buy assets, primarily debt securities. Basecoin works by creating an investable asset which the “central bank” (i.e. the algorithm, because it’s nothing like a central bank) issues to holders of the tokens which those token holders then sell to new entrants into the scheme.

Buying assets to create money vs. selling assets to obtain money. There’s a big difference.

Byrne, of course, is correct that there is a big difference between the buying of assets to create money and the selling of assets by the promoters of a cryptocurrency to obtain money. But the assets being sold are created by the promoters of the money-issuing concern in order to accumulate the working capital that the promoters plan to use in creating their currency, so the comparison between buying assets to create money and selling assets to obtain money is not exactly on point.

What Byrne is missing is that the central bank can take the demand for its currency as more or less given, a kind of economic fact of nature, though the exact explanation of that fact remains disturbingly elusive. The goal of a cryptocurrency promoter, however, is to create a demand for its currency that doesn’t already exist. That is above all a marketing and PR challenge. (Actually, that challenge has been rather successfully met, though for Bitcoins, at any rate, the operational challenge of creating a viable currency to meet the newly created demand seems logically impossible to meet.)

Second,

We need to talk about how a peg does and doesn’t work. . . .

Currently there are very efficient ways to peg the price of something to something else, let’s say (to keep it simple) $1. The first of these would be to execute a trust deed (cost: $0) saying that some entity, e.g. a bank, holds a set sum of money, say $1 billion, on trust absolutely for the holders of a token, which let’s call Dollarcoin for present purposes. If the token is redeemable at par from that bank (qua Trustee and not as depository), then the token ought to trade at close to $1, with perhaps a slight discount depending on the insolvency risk to which a Dollarcoin holder is exposed (although there are well-worn methods to keep the underlying dollars insolvency-remote, i.e. insulated from the risk of a collapse of that bank).

Put another way, there is a way to turn 1 dollarcoin into a $1 here [sic]. Easy-peasy, no questions asked, with ancient technology like paper and pens or SQL tables. The downside of course is that you need to 100% cash collateralize the system, which is (from a cost of capital perspective) rather expensive. This is the reason why fractional reserve banking exists.

The mistake here is the assumption that 100% cash collateralization is required for convertibility and parity. Under the gold standard, the convertibility of various national currencies into gold at fixed parities was maintained with far less than 100% gold cover against those currencies, and commercial banks and money-market funds routinely maintain the convertibility of deposits into currency at one-to-one parities with far less than 100% currency reserves against deposits. Sometimes convertibility in such systems breaks down temporarily, but such breakdowns are neither necessary nor inevitable, though they may sometimes, given the alternatives, be the best available option. I understand that banks undertake a legal obligation to convert deposits into currency at a one-to-one rate, but such a legal obligation is not the only possible legal rule under which banks could operate. The Bank of England, during the legal restriction of the convertibility of its banknotes into gold from 1797 to 1819, operated without any legal obligation to convert its banknotes into gold, though it was widely expected that convertibility would be resumed at some future date.
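A toy balance sheet shows why par convertibility does not require 100% cover (all numbers invented): redemptions are honored one-for-one so long as net redemptions do not exhaust reserves.

```python
# Par convertibility with fractional reserves (invented numbers): the bank
# redeems deposits one-for-one until net redemptions exhaust its reserves.
deposits, reserves = 1_000_000.0, 150_000.0               # 15% reserve ratio
daily_net_redemptions = [20_000, -5_000, 10_000, 30_000]  # negative = inflow

for r in daily_net_redemptions:
    reserves -= r
    deposits -= r
    assert reserves > 0, "parity breaks only if reserves are exhausted"

print(reserves, deposits)   # 95000.0 945000.0: parity maintained throughout
```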

While I am completely sympathetic to Byrne’s skepticism about the viability of cryptocurrencies, even cryptocurrencies with some kind of formal or informal peg to an actual currency like the dollar, he seems to think that because there are circumstances under which the currencies will fail, he has shown that it is impossible for the currencies ever to succeed. I believe that it would be a stretch for a currency like the Basecoin to be successful, but one can at least imagine a set of circumstances under which, in contrast to the Bitcoin, the Basecoin could be successful, though even under the rosiest possible scenario I can’t imagine how the Basecoin or any other cryptocurrency could displace the dollar as the world’s dominant currency. To be sure, the success of the Basecoin or some other “stabilized” cryptocurrency is a long shot, but success is not logically self-contradictory. Sometimes a prophecy, however improbable, can be self-fulfilling.

Milton Friedman and the Phillips Curve

In December 1967, Milton Friedman delivered his Presidential Address to the American Economic Association in Washington DC. In those days the AEA met in the week between Christmas and New Years, in contrast to the more recent practice of holding the convention in the week after New Years. That’s why the fiftieth anniversary of Friedman’s 1967 address was celebrated at the 2018 AEA convention. A special session was dedicated to commemoration of that famous address, published in the March 1968 American Economic Review, and fittingly one of the papers at the session was presented by the outgoing AEA president, Olivier Blanchard. Other papers were written by Thomas Sargent and Robert Hall, and by Greg Mankiw and Ricardo Reis. The papers were discussed by Lawrence Summers, Eric Nakamura, and Stanley Fischer. An all-star cast.

Maybe in a future post, I will comment on the papers presented in the Friedman session, but in this post I want to discuss a point that has been generally overlooked, not only in the three “golden” anniversary papers on Friedman and the Phillips Curve, but, as best as I can recall, in all the commentaries I’ve seen about Friedman and the Phillips Curve. The key point to understand about Friedman’s address is that his argument was basically an extension of the idea of monetary neutrality, which says that the real equilibrium of an economy corresponds to a set of relative prices that allows all agents simultaneously to execute their optimal desired purchases and sales conditioned on those relative prices. So it is only relative prices, not absolute prices, that matter. Taking an economy in equilibrium, if you were suddenly to double all prices, relative prices remaining unchanged, the equilibrium would be preserved, and the economy would proceed exactly – and optimally – as before, as if nothing had changed. (There are some complications about what is happening to the quantity of money in this thought experiment that I am skipping over.) On the other hand, if you change just a single price, not only would the market in which that price is determined be disequilibrated, but at least one, and potentially more than one, other market would be disequilibrated as well. The point here is that the real economy rules, and equilibrium in the real economy depends on relative, not absolute, prices.
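The doubling thought experiment can be stated in a few lines (the goods and prices below are invented for illustration):

```python
# Doubling every money price leaves all relative prices, and hence the real
# equilibrium, unchanged (goods and prices invented for illustration).
prices = {"bread": 2.0, "wine": 10.0, "labor": 20.0}
doubled = {good: 2 * p for good, p in prices.items()}
print(prices["wine"] / prices["bread"],
      doubled["wine"] / doubled["bread"])   # 5.0 5.0: relative price untouched
```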

What Friedman did was to argue that if money is neutral with respect to changes in the price level, it should also be neutral with respect to changes in the rate of inflation. The idea that you can wring some extra output and employment out of the economy just by choosing to increase the rate of inflation goes against the grain of two basic principles: (1) monetary neutrality (i.e., the real equilibrium of the economy is determined solely by real factors) and (2) Friedman’s famous non-existence (of a free lunch) theorem. In other words, you can’t make the economy as a whole better off just by printing money.

Or can you?

Actually you can, and Friedman himself understood that you can, but he argued that the possibility of making the economy as a whole better off (in the sense of increasing total output and employment) depends crucially on whether inflation is expected or unexpected. Only if inflation is not expected does it serve to increase output and employment. If inflation is correctly expected, the neutrality principle reasserts itself, so that output and employment are no different from what they would have been had prices not changed.

What that means is that policy makers (monetary authorities) can cause output and employment to increase by inflating the currency, as implied by the downward-sloping Phillips Curve, but that possibility simply reflects the fact that actual inflation exceeds expected inflation. And, sure, the monetary authorities can always surprise the public by raising the rate of inflation above the rate expected by the public, but that doesn’t mean that the public can be perpetually fooled by a monetary authority determined to keep inflation higher than expected. If that is the strategy of the monetary authorities, it will lead, sooner or later, to a very unpleasant outcome.

So, in any time period – the length of the time period corresponding to the time during which expectations are given – the short-run Phillips Curve for that time period is downward-sloping. But given the futility of perpetually delivering higher than expected inflation, the long-run Phillips Curve from the point of view of the monetary authorities trying to devise a sustainable policy must be essentially vertical.
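A standard textbook rendering of the argument (my notation and parameter values, not Friedman’s) makes the short-run/long-run distinction mechanical: with adaptive expectations, holding unemployment below its equilibrium rate requires ever-accelerating inflation.

```python
# Expectations-augmented Phillips curve with adaptive expectations (textbook
# rendering, illustrative parameters): pi = pi_e - a * (u - u_star).
a, lam, u_star = 2.0, 0.5, 5.0
pi_e = 2.0                          # expected inflation, percent

for year in range(10):
    u = 4.0                         # policy holds unemployment below u_star
    pi = pi_e - a * (u - u_star)    # short-run curve: surprise inflation
    print(year, round(pi, 1))       # 4.0, 5.0, 6.0, ... inflation ratchets up
    pi_e += lam * (pi - pi_e)       # expectations catch up each period
```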

Two quick parenthetical remarks. Friedman’s argument was far from original. Many critics of Keynesian policies had made similar arguments; the names Hayek, Haberler, Mises and Viner come immediately to mind, but the list could easily be lengthened. But the earliest version of the argument of which I am aware is Hayek’s 1934 reply in Econometrica to a discussion of Prices and Production by Alvin Hansen and Herbert Tout in their 1933 article reviewing recent business-cycle literature in Econometrica in which they criticized Hayek’s assertion that a monetary expansion that financed investment spending in excess of voluntary savings would be unsustainable. They pointed out that there was nothing to prevent the monetary authority from continuing to create money, thereby continually financing investment in excess of voluntary savings. Hayek’s reply was that a permanent constant rate of monetary expansion would not suffice to permanently finance investment in excess of savings, because once that monetary expansion was expected, prices would adjust so that in real terms the constant flow of monetary expansion would correspond to the same amount of investment that had been undertaken prior to the first and unexpected round of monetary expansion. To maintain a rate of investment permanently in excess of voluntary savings would require progressively increasing rates of monetary expansion over and above the expected rate of monetary expansion, which would sooner or later prove unsustainable. The gist of the argument, more than three decades before Friedman’s 1967 Presidential address, was exactly the same as Friedman’s.

A further aside. But what Hayek failed to see in making this argument was that, in so doing, he was refuting his own argument in Prices and Production that only a constant rate of total expenditure and total income is consistent with maintenance of a real equilibrium in which voluntary saving and planned investment are equal. Obviously, any rate of monetary expansion, if correctly foreseen, would be consistent with a real equilibrium with saving equal to investment.

My second remark is to note the ambiguous meaning of the short-run Phillips Curve relationship. The underlying causal relationship reflected in the negative correlation between inflation and unemployment can be understood either as increases in inflation causing unemployment to go down, or as increases in unemployment causing inflation to go down. Undoubtedly the causality runs in both directions, but subtle differences in the understanding of the causal mechanism can lead to very different policy implications. Usually the Keynesian understanding of the causality is that it runs from unemployment to inflation, while a more monetarist understanding treats inflation as a policy instrument that determines (with expected inflation treated as a parameter) at least directionally the short-run change in the rate of unemployment.

Now here is the main point that I want to make in this post. The standard interpretation of the Friedman argument is that since attempts to increase output and employment by monetary expansion are futile, the best policy for a monetary authority to pursue is a stable and predictable one that keeps the economy at or near the optimal long-run growth path that is determined by real – not monetary – factors. Thus, the best policy is to find a clear and predictable rule for how the monetary authority will behave, so that monetary mismanagement doesn’t inadvertently become a destabilizing force causing the economy to deviate from its optimal growth path. In the 50 years since Friedman’s address, this message has been taken to heart by monetary economists and monetary authorities, leading to a broad consensus in favor of inflation targeting with the target now almost always set at 2% annual inflation. (I leave aside for now the tricky question of what a clear and predictable monetary rule would look like.)

But this interpretation, clearly the one that Friedman himself drew from his argument, doesn’t actually follow from the argument that monetary expansion can’t affect the long-run equilibrium growth path of an economy. The monetary-neutrality argument, being a pure comparative-statics exercise, assumes that an economy, starting from a position of equilibrium, is subjected to a parametric change (either in the quantity of money or in the price level) and then asks what the new equilibrium of the economy will look like. The answer is: it will look exactly like the prior equilibrium, except that the price level will be twice as high, with twice as much money as previously, but with relative prices unchanged. The same sort of reasoning, with appropriate adjustments, can show that changing the expected rate of inflation will have no effect on the real equilibrium of the economy, with only the rate of inflation and the rate of monetary expansion affected.

This comparative-statics exercise teaches us something, but not as much as Friedman and his followers thought. True, you can’t get more out of the economy – at least not for very long – than its real equilibrium will generate. But what if the economy is not operating at its real equilibrium? Even Friedman didn’t believe that the economy always operates at its real equilibrium. Just read his Monetary History of the United States. Real-business cycle theorists do believe that the economy always operates at its real equilibrium, but they, unlike Friedman, think monetary policy is useless, so we can forget about them — at least for purposes of this discussion. So if we have reason to think that the economy is falling short of its real equilibrium, as almost all of us believe that it sometimes does, why should we assume that monetary policy might not nudge the economy in the direction of its real equilibrium?

The answer to that question is not so obvious, but one answer might be that if you use monetary policy to move the economy toward its real equilibrium, you might make mistakes sometimes and overshoot the real equilibrium and then bad stuff would happen and inflation would run out of control, and confidence in the currency would be shattered, and you would find yourself in a re-run of the horrible 1970s. I get that argument, and it is not totally without merit, but I wouldn’t characterize it as overly compelling. On a list of compelling arguments, I would put it just above, or possibly just below, the domino theory on the basis of which the US fought the Vietnam War.

But even if the argument is not overly compelling, it should not be dismissed entirely, so here is a way of taking it into account. Just for fun, I will call it a Taylor Rule for the Inflation Target (IT). Let us assume that the long-run inflation target is 2%, and let (Y − Y*) be the output gap between current real GDP and potential GDP (i.e., the GDP corresponding to the real equilibrium of the economy). We could then define the following Taylor Rule for the inflation target:

IT = α(2%) − β((Y − Y*)/Y*).

This equation says that the inflation target in any period would be a linear combination of the default inflation target of 2%, multiplied by an adjustment coefficient α designed to keep successively chosen inflation targets from deviating from the long-term price-level path corresponding to 2% annual inflation, and of a fraction β of the output gap expressed as a percentage of potential GDP, which enters with a negative sign so that a shortfall of output below potential raises the target. Thus, for example, if the output gap were −5% and β were 0.5, the short-term inflation target would be raised to 4.5% if α were 1.

However, if on average output gaps are expected to be negative, then α would have to be chosen to be less than 1 in order for the actual time path of the price level to revert back to a target price-level path corresponding to a 2% annual rate.
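A sketch of the rule as it might be computed period by period (my rendering of the formula above; all parameter values are illustrative):

```python
# Period-by-period inflation target under the proposed rule (illustrative).
def inflation_target(gap, alpha=1.0, beta=0.5, long_run=2.0):
    """gap = (Y - Y*)/Y* in percent; negative when output is below potential."""
    return alpha * long_run - beta * gap

print(inflation_target(-5.0))             # 4.5: the worked example from the text
print(inflation_target(0.0))              # 2.0: at potential, target = 2%
print(inflation_target(-5.0, alpha=0.9))  # 4.3: alpha < 1 pulls the path back
```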

Such a procedure would fit well with the current dual inflation and employment mandate of the Federal Reserve. The long-term price level path would correspond to the price-stability mandate, while the adjustable short-term choice of the IT would correspond to and promote the goal of maximum employment by raising the inflation target when unemployment was high as a countercyclical policy for promoting recovery. But short-term changes in the IT would not be allowed to cause a long-term deviation of the price level from its target path. The dual mandate would ensure that relatively higher inflation in periods of high unemployment would be compensated for by periods of relatively low inflation in periods of low unemployment.

Alternatively, you could just target nominal GDP at a rate consistent with a long-run average 2% inflation target for the price level, with the target for nominal GDP adjusted over time as needed to ensure that the 2% average inflation target for the price level was also maintained.


