Archive for the 'Phillips Curve' Category

The Phillips Curve and the Lucas Critique

With unemployment at the lowest levels since the start of the millennium (initial unemployment claims in February were the lowest since 1973!), lots of people are starting to wonder if we might be headed for a pick-up in the rate of inflation, which has been averaging well under 2% a year since the financial crisis of September 2008 ushered in the Little Depression of 2008-09 and beyond. The Fed has already signaled its intention to continue raising interest rates even though inflation remains well anchored at rates below the Fed’s 2% target. And among Fed watchers and Fed cognoscenti, the only question being asked is not whether the Fed will raise its Fed Funds rate target, but how frequent those (presumably) quarter-point increments will be.

The prevailing view seems to be that the Federal Open Market Committee (FOMC), in raising interest rates — even before there is any real evidence of an increase in an inflation rate that is still below the Fed’s 2% target — reasons that a preemptive strike is required to prevent inflation from accelerating and rising above what has become an inflation ceiling — not an inflation target — of 2%.

Why does the Fed believe that inflation is going to rise? That’s what the econoblogosphere has, of late, been trying to figure out. And the consensus seems to be that the FOMC has concluded that the risk that inflation will break the 2% ceiling it has implicitly adopted has become unacceptably high. That risk assessment is based on some sort of analysis in which it is inferred from the Phillips Curve that, with unemployment nearing historically low levels, rising inflation has become dangerously likely. And so the next question is: why is the FOMC fretting about the Phillips Curve?

In a blog post earlier this week, David Andolfatto of the St. Louis Federal Reserve Bank tried to spell out in some detail the kind of reasoning that lay behind the FOMC decision to actively tighten the stance of monetary policy to avoid any increase in inflation. At the same time, Andolfatto expressed his own view that the rate of inflation is not determined by the rate of unemployment, but by the stance of monetary policy.

Andolfatto’s avowal of monetarist faith in the purely monetary forces that govern the rate of inflation elicited a rejoinder from Paul Krugman expressing considerable annoyance at Andolfatto’s monetarism.

Here are three questions about inflation, unemployment, and Fed policy. Some people may imagine that they’re the same question, but they definitely aren’t:

  1. Does the Fed know how low the unemployment rate can go?
  2. Should the Fed be tightening now, even though inflation is still low?
  3. Is there any relationship between unemployment and inflation?

It seems obvious to me that the answer to (1) is no. We’re currently well above historical estimates of full employment, and inflation remains subdued. Could unemployment fall to 3.5% without accelerating inflation? Honestly, we don’t know.

Agreed.

I would also argue that the Fed is making a mistake by tightening now, for several reasons. One is that we really don’t know how low U can go, and won’t find out if we don’t give it a chance. Another is that the costs of getting it wrong are asymmetric: waiting too long to tighten might be awkward, but tightening too soon increases the risks of falling back into a liquidity trap. Finally, there are very good reasons to believe that the Fed’s 2 percent inflation target is too low; certainly the belief that it was high enough to make the zero lower bound irrelevant has been massively falsified by experience.

Agreed, but the better approach would be to target the price level, or even better nominal GDP, so that short-term undershooting of the inflation target would provide increased leeway to allow inflation to overshoot the inflation target without undermining the credibility of the commitment to price stability.
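The extra leeway that level targeting provides can be illustrated with a small numerical sketch (the 2% figure is the target under discussion; the starting price level and the size of the undershoot are invented for illustration):

```python
# Sketch: under price-level targeting, an undershoot of the inflation
# target raises the implied inflation target for the next period;
# under pure inflation targeting, bygones are bygones.

def implied_inflation_target(p_actual, p0=100.0, t=1, pi_star=0.02):
    """Inflation needed in period t+1 to return to the 2% price-level path."""
    p_path_next = p0 * (1 + pi_star) ** (t + 1)  # where the path says prices should be at t+1
    return p_path_next / p_actual - 1

# Suppose inflation ran at only 1% in year 1, so the price level is 101
# instead of the 102 that the 2% path calls for.
catch_up = implied_inflation_target(p_actual=101.0)
print(f"{catch_up:.3%}")  # roughly 3% -- a temporary overshoot of the 2% target
```

Under pure inflation targeting, by contrast, the 1% undershoot would simply be forgiven: the target would remain 2%, and the price level would stay permanently below its original path.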

But should we drop the whole notion that unemployment has anything to do with inflation? Via FTAlphaville, I see that David Andolfatto is at it again, asserting that there’s something weird about asserting an unemployment-inflation link, and that inflation is driven by an imbalance between money supply and money demand.

But one can fully accept that inflation is driven by an excess supply of money without denying that there is a link between inflation and unemployment. In the normal course of events an excess supply of money may lead to increased spending as people attempt to exchange their excess cash balances for real goods and services. The increased spending can induce additional output and additional employment along with rising prices. The reverse happens when there is an excess demand for cash balances and people attempt to build up their cash holdings by cutting back their spending, thereby reducing output. So the inflation-unemployment relationship results from the effects induced by a particular causal circumstance. But that does not mean that an imbalance in the supply of money is the only cause of inflation or of price-level changes.

Inflation can also result from nothing more than the anticipation of inflation. Expected inflation can also affect output and employment, so inflation and unemployment are related not only by both being affected by an excess supply of (or demand for) money, but by both being affected by expected inflation.

Even if you think that inflation is fundamentally a monetary phenomenon (which you shouldn’t, as I’ll explain in a minute), wage- and price-setters don’t care about money demand; they care about their own ability or lack thereof to charge more, which has to – has to – involve the amount of slack in the economy. As Karl Smith pointed out a decade ago, the doctrine of immaculate inflation, in which money translates directly into inflation – a doctrine that was invoked to predict inflationary consequences from Fed easing despite a depressed economy – makes no sense.

There’s no reason for anyone to care about overall money demand in this scenario. Price setters respond to the perceived change in the rate of spending induced by an excess supply of money. (I note parenthetically that I am referring now to an excess supply of base money, not to an excess supply of bank-created money, which, unlike base money, is not a hot potato: it can be withdrawn from circulation in response to market incentives.) Now some price setters may actually use macroeconomic information to forecast price movements, but recognizing that channel would take us into the realm of an expectations theory of inflation, not the strict monetary theory of inflation that Krugman is criticizing.

And the claim that there’s weak or no evidence of a link between unemployment and inflation is sustainable only if you insist on restricting yourself to recent U.S. data. Take a longer and broader view, and the evidence is obvious.

Consider, for example, the case of Spain. Inflation in Spain is definitely not driven by monetary factors, since Spain hasn’t even had its own money since it joined the euro. Nonetheless, there have been big moves in both Spanish inflation and Spanish unemployment:

That period of low unemployment, by Spanish standards, was the result of huge inflows of capital, fueling a real estate bubble. Then came the sudden stop after the Greek crisis, which sent unemployment soaring.

Meanwhile, the pre-crisis era was marked by relatively high inflation, well above the euro-area average; the post-crisis era by near-zero inflation, below the rest of the euro area, allowing Spain to achieve (at immense cost) an “internal devaluation” that has driven an export-led recovery.

So, do you really want to claim that the swings in inflation had nothing to do with the swings in unemployment? Really, really?

No one who believes in a monetary theory of inflation claims, or should claim, that swings in inflation and unemployment are unrelated. But acknowledging the relationship between inflation and unemployment does not entail accepting the proposition that unemployment is a causal determinant of inflation.

But if you concede that unemployment had a lot to do with Spanish inflation and disinflation, you’ve already conceded the basic logic of the Phillips curve. You may say, with considerable justification, that U.S. data are too noisy to have any confidence in particular estimates of that curve. But denying that it makes sense to talk about unemployment driving inflation is foolish.

No it’s not foolish, because the relationship between inflation and unemployment is not a causal relationship; it’s a coincidental relationship. The level of employment depends on many things and some of the things that employment depends on also affect inflation. That doesn’t mean that employment causally affects inflation.

When I read Krugman’s post and the Andolfatto post that provoked Krugman, it occurred to me that the way to summarize all of this is to say that unemployment and inflation are determined by a variety of deep structural (causal) relationships. The Phillips Curve, although it was once fashionable to refer to it as the missing equation in the Keynesian model, is not a structural relationship; it is a reduced form. The negative relationship between unemployment and inflation found in empirical studies does not tell us that high unemployment reduces inflation, any more than a positive empirical relationship between the price of a commodity and the quantity sold would tell you that the demand curve for that product is positively sloped.

It may be interesting to know that there is a negative empirical relationship between inflation and unemployment, but we can’t rely on that relationship in making macroeconomic policy. I am not a big admirer of the Lucas Critique for reasons that I have discussed in other posts (e.g., here and here). But, the Lucas Critique, a rather trivial result that was widely understood even before Lucas took ownership of the idea, does at least warn us not to confuse a reduced form with a causal relationship.

Milton Friedman and the Phillips Curve

In December 1967, Milton Friedman delivered his Presidential Address to the American Economic Association in Washington DC. In those days the AEA met in the week between Christmas and New Year’s, in contrast to the more recent practice of holding the convention in the week after New Year’s. That’s why the anniversary of Friedman’s 1967 address was celebrated at the 2018 AEA convention. A special session was dedicated to commemoration of that famous address, published in the March 1968 American Economic Review, and fittingly one of the papers at the session was presented by the outgoing AEA president Olivier Blanchard. Other papers were written by Thomas Sargent and Robert Hall, and by Greg Mankiw and Ricardo Reis. The papers were discussed by Lawrence Summers, Emi Nakamura, and Stanley Fischer. An all-star cast.

Maybe in a future post, I will comment on the papers presented in the Friedman session, but in this post I want to discuss a point that has been generally overlooked, not only in the three “golden” anniversary papers on Friedman and the Phillips Curve, but, as best as I can recall, in all the commentaries I’ve seen about Friedman and the Phillips Curve. The key point to understand about Friedman’s address is that his argument was basically an extension of the idea of monetary neutrality, which says that the real equilibrium of an economy corresponds to a set of relative prices that allows all agents simultaneously to execute their optimal desired purchases and sales conditioned on those relative prices. So it is only relative prices, not absolute prices, that matter. Taking an economy in equilibrium, if you were suddenly to double all prices, relative prices remaining unchanged, the equilibrium would be preserved, and the economy would proceed exactly – and optimally – as before, as if nothing had changed. (There are some complications about what is happening to the quantity of money in this thought experiment that I am skipping over.) On the other hand, if you were to change just a single price, not only would the market in which that price is determined be disequilibrated; at least one, and potentially more than one, other market would be disequilibrated as well. The point here is that the real economy rules, and equilibrium in the real economy depends on relative, not absolute, prices.
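The doubling thought experiment is trivial to verify numerically; the goods and prices below are, of course, invented for illustration:

```python
# Trivial numerical check of the neutrality thought experiment:
# doubling every money price leaves all relative prices unchanged.

prices = {"bread": 2.0, "wine": 8.0, "cloth": 5.0}
doubled = {good: 2 * p for good, p in prices.items()}

def relative_prices(p):
    """Express every price in units of bread (the numeraire)."""
    return {good: v / p["bread"] for good, v in p.items()}

assert relative_prices(prices) == relative_prices(doubled)
print(relative_prices(doubled))  # {'bread': 1.0, 'wine': 4.0, 'cloth': 2.5}
```

Whatever numeraire is chosen, the array of relative prices is identical before and after the doubling, which is the sense in which the real equilibrium is undisturbed.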

What Friedman did was to argue that if money is neutral with respect to changes in the price level, it should also be neutral with respect to changes in the rate of inflation. The idea that you can wring some extra output and employment out of the economy just by choosing to increase the rate of inflation goes against the grain of two basic principles: (1) monetary neutrality (i.e., the real equilibrium of the economy is determined solely by real factors) and (2) Friedman’s famous non-existence (of a free lunch) theorem. In other words, you can’t make the economy as a whole better off just by printing money.

Or can you?

Actually you can, and Friedman himself understood that you can, but he argued that the possibility of making the economy as a whole better off (in the sense of increasing total output and employment) depends crucially on whether inflation is expected or unexpected. Only if inflation is not expected does it serve to increase output and employment. If inflation is correctly expected, the neutrality principle reasserts itself, so that output and employment are no different from what they would have been had prices not changed.

What that means is that policy makers (monetary authorities) can cause output and employment to increase by inflating the currency, as implied by the downward-sloping Phillips Curve, but that simply reflects the fact that actual inflation exceeds expected inflation. And, sure, the monetary authorities can always surprise the public by raising the rate of inflation above the rate expected by the public, but that doesn’t mean that the public can be perpetually fooled by a monetary authority determined to keep inflation higher than expected. If that is the strategy of the monetary authorities, it will lead, sooner or later, to a very unpleasant outcome.

So, in any time period – the length of the time period corresponding to the time during which expectations are given – the short-run Phillips Curve for that time period is downward-sloping. But given the futility of perpetually delivering higher than expected inflation, the long-run Phillips Curve from the point of view of the monetary authorities trying to devise a sustainable policy must be essentially vertical.

Two quick parenthetical remarks. Friedman’s argument was far from original. Many critics of Keynesian policies had made similar arguments; the names Hayek, Haberler, Mises and Viner come immediately to mind, but the list could easily be lengthened. But the earliest version of the argument of which I am aware is Hayek’s 1934 reply in Econometrica to a discussion of Prices and Production by Alvin Hansen and Herbert Tout in their 1933 article reviewing recent business-cycle literature in Econometrica in which they criticized Hayek’s assertion that a monetary expansion that financed investment spending in excess of voluntary savings would be unsustainable. They pointed out that there was nothing to prevent the monetary authority from continuing to create money, thereby continually financing investment in excess of voluntary savings. Hayek’s reply was that a permanent constant rate of monetary expansion would not suffice to permanently finance investment in excess of savings, because once that monetary expansion was expected, prices would adjust so that in real terms the constant flow of monetary expansion would correspond to the same amount of investment that had been undertaken prior to the first and unexpected round of monetary expansion. To maintain a rate of investment permanently in excess of voluntary savings would require progressively increasing rates of monetary expansion over and above the expected rate of monetary expansion, which would sooner or later prove unsustainable. The gist of the argument, more than three decades before Friedman’s 1967 Presidential address, was exactly the same as Friedman’s.

A further aside. But what Hayek failed to see in making this argument was that, in so doing, he was refuting his own argument in Prices and Production that only a constant rate of total expenditure and total income is consistent with maintenance of a real equilibrium in which voluntary saving and planned investment are equal. Obviously, any rate of monetary expansion, if correctly foreseen, would be consistent with a real equilibrium with saving equal to investment.

My second remark is to note the ambiguous meaning of the short-run Phillips Curve relationship. The underlying causal relationship reflected in the negative correlation between inflation and unemployment can be understood either as increases in inflation causing unemployment to go down, or as increases in unemployment causing inflation to go down. Undoubtedly the causality runs in both directions, but subtle differences in the understanding of the causal mechanism can lead to very different policy implications. Usually the Keynesian understanding of the causality is that it runs from unemployment to inflation, while a more monetarist understanding treats inflation as a policy instrument that determines (with expected inflation treated as a parameter) at least directionally the short-run change in the rate of unemployment.

Now here is the main point that I want to make in this post. The standard interpretation of the Friedman argument is that since attempts to increase output and employment by monetary expansion are futile, the best policy for a monetary authority to pursue is a stable and predictable one that keeps the economy at or near the optimal long-run growth path that is determined by real – not monetary – factors. Thus, the best policy is to find a clear and predictable rule for how the monetary authority will behave, so that monetary mismanagement doesn’t inadvertently become a destabilizing force causing the economy to deviate from its optimal growth path. In the 50 years since Friedman’s address, this message has been taken to heart by monetary economists and monetary authorities, leading to a broad consensus in favor of inflation targeting with the target now almost always set at 2% annual inflation. (I leave aside for now the tricky question of what a clear and predictable monetary rule would look like.)

But this interpretation, clearly the one that Friedman himself drew from his argument, doesn’t actually follow from the argument that monetary expansion can’t affect the long-run equilibrium growth path of an economy. The monetary-neutrality argument, being a pure comparative-statics exercise, assumes that an economy, starting from a position of equilibrium, is subjected to a parametric change (either in the quantity of money or in the price level) and then asks what the new equilibrium of the economy will look like. The answer is: it will look exactly like the prior equilibrium, except that the price level will be twice as high and there will be twice as much money as previously, but with relative prices unchanged. The same sort of reasoning, with appropriate adjustments, can show that changing the expected rate of inflation will have no effect on the real equilibrium of the economy, with only the rate of inflation and the rate of monetary expansion affected.

This comparative-statics exercise teaches us something, but not as much as Friedman and his followers thought. True, you can’t get more out of the economy – at least not for very long – than its real equilibrium will generate. But what if the economy is not operating at its real equilibrium? Even Friedman didn’t believe that the economy always operates at its real equilibrium. Just read his Monetary History of the United States. Real-business cycle theorists do believe that the economy always operates at its real equilibrium, but they, unlike Friedman, think monetary policy is useless, so we can forget about them — at least for purposes of this discussion. So if we have reason to think that the economy is falling short of its real equilibrium, as almost all of us believe that it sometimes does, why should we assume that monetary policy might not nudge the economy in the direction of its real equilibrium?

The answer to that question is not so obvious, but one answer might be that if you use monetary policy to move the economy toward its real equilibrium, you might make mistakes sometimes and overshoot the real equilibrium and then bad stuff would happen and inflation would run out of control, and confidence in the currency would be shattered, and you would find yourself in a re-run of the horrible 1970s. I get that argument, and it is not totally without merit, but I wouldn’t characterize it as overly compelling. On a list of compelling arguments, I would put it just above, or possibly just below, the domino theory on the basis of which the US fought the Vietnam War.

But even if the argument is not overly compelling, it should not be dismissed entirely, so here is a way of taking it into account. Just for fun, I will call it a Taylor Rule for the Inflation Target (IT). Let us assume that the long-run inflation target is 2% and let us say that (Y – Y*) is the output gap between current real GDP and potential GDP (i.e., the GDP corresponding to the real equilibrium of the economy). We could then define the following Taylor Rule for the inflation target:

IT = α(2%) – β((Y – Y*)/Y*).

This equation says that the inflation target in any period is the default inflation target of 2% times an adjustment coefficient α, designed to keep successively chosen inflation targets from deviating from the long-term price-level path corresponding to 2% annual inflation, minus a fraction β of the output gap expressed as a percentage of potential GDP, so that a negative gap raises the target. Thus, for example, if the output gap were -5% and β were 0.5, the short-term inflation target would be raised to 4.5% if α were 1.

However, if on average output gaps are expected to be negative, then α would have to be chosen to be less than 1 in order for the actual time path of the price level to revert back to a target price-level corresponding to a 2% annual rate.
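As a rough sketch, the rule might be implemented as follows, with the sign convention chosen so that a negative output gap raises the target; the particular values of α and β are purely illustrative, not recommendations:

```python
# Minimal sketch of the Taylor-style rule for the inflation target (IT):
# IT = alpha * 2% - beta * (output gap). A negative gap (output below
# potential) raises the short-run target above the 2% default.

def inflation_target(gap, alpha=1.0, beta=0.5, long_run_target=0.02):
    """gap = (Y - Y*)/Y*, expressed as a fraction (e.g. -0.05 for -5%)."""
    return alpha * long_run_target - beta * gap

print(f"{inflation_target(-0.05):.1%}")             # -5% gap -> 4.5% short-run target
print(f"{inflation_target(0.0):.1%}")               # zero gap -> the 2% default
print(f"{inflation_target(-0.05, alpha=0.9):.1%}")  # alpha < 1 shaves the target to 4.3%
```

Setting α slightly below 1 shaves the target in every period, offsetting the persistent upward adjustments that a run of negative output gaps would otherwise impose on the price-level path.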

Such a procedure would fit well with the current dual inflation and employment mandate of the Federal Reserve. The long-term price level path would correspond to the price-stability mandate, while the adjustable short-term choice of the IT would correspond to and promote the goal of maximum employment by raising the inflation target when unemployment was high as a countercyclical policy for promoting recovery. But short-term changes in the IT would not be allowed to cause a long-term deviation of the price level from its target path. The dual mandate would ensure that relatively higher inflation in periods of high unemployment would be compensated for by periods of relatively low inflation in periods of low unemployment.

Alternatively, you could just target nominal GDP at a rate consistent with a long-run average 2% inflation target for the price level, with the target for nominal GDP adjusted over time as needed to ensure that the 2% average inflation target for the price level was also maintained.
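A sketch of what such a nominal-GDP target path might look like, under the purely illustrative assumption that potential real GDP grows at 2% a year:

```python
# Sketch of the nominal-GDP alternative: set the NGDP target path so
# that, given an assumed potential real growth rate, it implies 2%
# average inflation. The 2% real growth figure is an assumption.

def ngdp_target_path(ngdp0, years, real_growth=0.02, inflation=0.02):
    """Nominal GDP levels consistent with trend real growth plus 2% inflation."""
    g = (1 + real_growth) * (1 + inflation) - 1  # implied nominal growth rate
    return [ngdp0 * (1 + g) ** t for t in range(years + 1)]

path = ngdp_target_path(ngdp0=100.0, years=3)
print([round(x, 2) for x in path])  # [100.0, 104.04, 108.24, 112.62]
```

In practice the path would be revised over time as estimates of potential growth changed, which is the adjustment needed to ensure that the 2% average inflation target for the price level was maintained.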

Rules vs. Discretion Historically Contemplated

Here is a new concluding section which I have just written for my paper “Rules versus Discretion in Monetary Policy: Historically Contemplated,” which I spoke about last September at the Mercatus Conference on Monetary Rules in a Post-Crisis World. I have been working a lot on the paper over the past month or so; I hope to post a draft soon on SSRN, and the paper is now under review for publication. I apologize for having written very little in the past month and for having failed to respond to any comments on my previous posts. I simply have been too busy with work and life to have any energy left for blogging. I look forward to being more involved in the blog over the next few months and expect to be posting some sections of a couple of papers I am going to be writing. But I’m offering no guarantees. It is gratifying to know that people are still visiting the blog and reading some of my old posts.

Although recognition of a need for some rule to govern the conduct of the monetary authority originated in the perceived incentive of the authority to opportunistically abuse its privileged position, the expectations of the public (including that small, but modestly influential, segment consisting of amateur and professional economists) about what monetary rules might actually accomplish have evolved and expanded over the course of the past two centuries. As Laidler (“Economic Ideas, the Monetary Order, and the Uneasy Case for Monetary Rules”) shows, that evolution has been driven by both the evolution of economic and monetary institutions and the evolution of economic and monetary doctrines about how those institutions work.

I distinguish between two types of rules: price rules and quantity rules. The simplest price rule involved setting the price of a commodity – usually gold or silver – in terms of a monetary unit whose supply was controlled by the monetary authority, or defining a monetary unit as a specific quantity of a particular commodity. Under the classical gold standard, for example, the monetary authority stood ready to buy or sell gold on demand at a legally determined price of gold in terms of the monetary unit. Thus, the fixed price of gold under the gold standard was originally thought to serve as both the policy target of the rule and the operational instrument for implementing the rule.

However, as monetary institutions and theories evolved, it became apparent that there were policy objectives other than simply maintaining the convertibility of the monetary unit into the standard commodity that required the attention of the monetary authority. The first attempt to impose an additional policy goal on a monetary authority was the Bank Charter Act of 1844 which specified a quantity target – the aggregate of banknotes in circulation in Britain – which the monetary authority — the Bank of England – was required to reach by following a simple mechanical rule. By imposing a 100-percent marginal gold-reserve requirement on the notes issued by the Bank of England, the Bank Charter Act made the quantity of banknotes issued by the Bank of England both the target of the quantity rule and the instrument by which the rule was implemented.

Owing to deficiencies in the monetary theory on the basis of which the Act was designed and to the evolution of British monetary practices and institutions, the conceptual elegance of the Bank Charter Act was not matched by its efficacy in practice. But despite, or, more likely, because of, the ultimate failure of the Bank Charter Act, the gold standard, surviving recurring financial crises in Great Britain in the middle third of the nineteenth century, was eventually adopted by many other countries in the 1870s, becoming the de facto international monetary system from the late 1870s until the start of World War I. Operation of the gold standard was defined by, and depended on, the observance of a single price rule in which the value of a currency was defined by its legal gold content, so that corresponding to each gold-standard currency, there was an official gold price at which the monetary authority was obligated to buy or sell gold on demand.

The value – the purchasing power – of gold was relatively stable over the 35 or so years of the gold standard era, but that stability could not survive the upheavals associated with World War I, and so the problem in reconstructing the postwar monetary system was deciding what kind of monetary rule should govern the postwar economy. Was it enough merely to restore the old currency parities – perhaps adjusted for differences in the extent of wartime and postwar currency depreciation – that had governed the classical gold standard, or was it necessary to take into account other factors, e.g., the purchasing power of gold, in restoring the gold standard? This basic conundrum was never satisfactorily answered, and the failure to answer it was undoubtedly a contributing, and perhaps dominant, factor in the economic collapse that began at the end of 1929, ultimately leading to the abandonment of the gold standard.

Searching for a new monetary regime to replace the failed gold standard, but to some extent inspired by the Bank Charter Act of the previous century, Henry Simons and ten fellow University of Chicago economists devised a totally new monetary system based on 100-percent reserve banking. The original Chicago proposal for 100-percent reserve banking proposed a monetary rule for stabilizing the purchasing power of fiat money. The 100-percent banking proposal would give the monetary authority complete control over the quantity of money, thereby enhancing the power of the monetary authority to achieve its price-level target. The Chicago proposal was thus inspired by a desire to increase the likelihood that the monetary authority could successfully implement the desired price rule. The price level was the target, and the quantity of money was the instrument. But as long as private fractional-reserve banks remained in operation, the monetary authority would lack effective control over the instrument. That was the rationale for replacing fractional reserve banks with 100-percent reserve banks.

But Simons eventually decided in his paper (“Rules versus Authorities in Monetary Policy”) that a price-level target was undesirable in principle, because allowing the monetary authority to choose which price level to stabilize, thereby favoring some groups at the expense of others, would grant too much discretion to the monetary authority. Rejecting price-level stabilization as a monetary rule, Simons concluded that the exercise of discretion could be avoided only if the quantity of money was the target as well as the instrument of a monetary rule. Simons’s ideal monetary rule was therefore to keep the quantity of money in the economy constant — forever. But having found the ideal rule, Simons immediately rejected it, because he realized that the reforms in the financial and monetary systems necessary to make such a rule viable over the long run would never be adopted. And so he reluctantly and unhappily reverted to the price-level stabilization rule that he and his Chicago colleagues had proposed in 1933.

Simons’s student Milton Friedman continued to espouse his teacher’s opposition to discretion, and as late as 1959 (A Program for Monetary Stability) he continued to advocate 100-percent reserve banking. But in the early 1960s, he adopted his k-percent rule and gave up his support for 100-percent banking. But despite giving up on 100-percent banking, Friedman continued to argue that the k-percent rule was less discretionary than the gold standard or a price-level rule, because neither the gold standard nor a price-level rule eliminated the exercise of discretion by the monetary authority in its implementation of policy, failing to acknowledge that, under any of the definitions that he used (usually M1 and sometimes M2), the quantity of money was a target, not an instrument. Of course, Friedman did eventually abandon his k-percent rule, but that acknowledgment came at least a decade after almost everyone else had recognized its unsuitability as a guide for conducting monetary policy, let alone as a legally binding rule, and long after Friedman’s repeated predictions that rapid growth of the monetary aggregates in the 1980s presaged the return of near-double-digit inflation.

However, the work of Kydland and Prescott (“Rules Rather than Discretion: The Inconsistency of Optimal Plans”) on time inconsistency has provided an alternative basis on which to argue against discretion: that the lack of commitment to a long-run policy would lead to self-defeating short-term attempts to deviate from the optimal long-term policy.[1]

It is now, I think, generally understood that a monetary authority has four primary instruments available to it in conducting monetary policy: the quantity of base money, the lending rate it charges to banks, the deposit rate it pays banks on reserves, and an exchange rate against some other currency or some asset. A variety of goals remain available as well: nominal goals like inflation, the price level, or nominal income, or even an index of stock prices, as well as real goals like real GDP and employment.

Ever since Friedman and Phelps independently argued that the long-run Phillips Curve is vertical, a consensus has developed that countercyclical monetary policy is basically ineffectual, because the effects of countercyclical policy will be anticipated, so that the only long-run effect of countercyclical policy is to raise the average rate of inflation without affecting output and employment in the long run. Because the reasoning that generates this result is essentially that money is neutral in the long run, the reasoning is not as compelling as the professional consensus in its favor would suggest. The monetary-neutrality result applies only under the very special assumptions of a comparative-statics exercise comparing an initial equilibrium with a final equilibrium. But the whole point of countercyclical policy is to speed the adjustment from a disequilibrium with high unemployment back to a low-unemployment equilibrium. A comparative-statics exercise provides no theoretical, much less empirical, support for the proposition that anticipated monetary policy cannot have real effects.

So the range of possible targets and the range of possible instruments now provide considerable latitude to supporters of monetary rules to recommend alternative monetary rules incorporating many different combinations of alternative instruments and alternative targets. As of now, we have arrived at few solid theoretical conclusions about the relative effectiveness of alternative rules and even less empirical evidence about their effectiveness. But at least we know that, to be viable, a monetary rule will almost certainly have to be expressed in terms of one or more targets while allowing the monetary authority at least some discretion to adjust its control over its chosen instruments in order to effectively achieve its target (McCallum 1987, 1988). That does not seem like a great deal of progress to have made in the two centuries since economists began puzzling over how to construct an appropriate rule to govern the behavior of the monetary authority, but it is progress nonetheless. And, if we are so inclined, we can at least take some comfort in knowing that earlier generations have left us a lot of room for improvement.

Footnote:

[1] Friedman in fact recognized the point in his writings, but he emphasized the dangers of allowing discretion in the choice of instruments rather than the time-inconsistency problem, because it was only the former argument that provided a basis for preferring his quantity rule over price rules.

Richard Lipsey and the Phillips Curve Redux

Almost three and a half years ago, I published a post about Richard Lipsey’s paper “The Phillips Curve and the Tyranny of an Assumed Unique Macro Equilibrium.” The paper, originally presented at the 2013 meeting of the History of Economics Society, has just been published in the Journal of the History of Economic Thought, with a slightly revised title, “The Phillips Curve and an Assumed Unique Macroeconomic Equilibrium in Historical Context.” The abstract of the revised published version of the paper is different from the earlier abstract included in my 2013 post. Here is the new abstract.

An early post-WWII debate concerned the most desirable demand and inflationary pressures at which to run the economy. Context was provided by Keynesian theory devoid of a full employment equilibrium and containing its mainly forgotten, but still relevant, microeconomic underpinnings. A major input came with the estimates provided by the original Phillips curve. The debate seemed to be rendered obsolete by the curve’s expectations-augmented version with its natural rate of unemployment, and associated unique equilibrium GDP, as the only values consistent with stable inflation. The current behavior of economies with the successful inflation targeting is inconsistent with this natural-rate view, but is consistent with evolutionary theory in which economies have a wide range of GDP-compatible stable inflation. Now the early post-WWII debates are seen not to be as misguided as they appeared to be when economists came to accept the assumptions implicit in the expectations-augmented Phillips curve.

Publication of Lipsey’s article nicely coincides with Roger Farmer’s new book Prosperity for All, which I discussed in my previous post. A key point that Roger makes is that the assumption of a unique equilibrium, which underlies modern macroeconomics and the vertical long-run Phillips Curve, is neither theoretically compelling nor consistent with the empirical evidence. Lipsey’s article powerfully reinforces those arguments. Access to Lipsey’s article is gated on the JHET website, so in addition to the abstract, I will quote the introduction and a couple of paragraphs from the conclusion.

One important early post-WWII debate, which took place particularly in the UK, concerned the demand and inflationary pressures at which it was best to run the economy. The context for this debate was provided by early Keynesian theory with its absence of a unique full-employment equilibrium and its mainly forgotten, but still relevant, microeconomic underpinnings. The original Phillips Curve was highly relevant to this debate. All this changed, however, with the introduction of the expectations-augmented version of the curve with its natural rate of unemployment, and associated unique equilibrium GDP, as the only values consistent with a stable inflation rate. This new view of the economy found easy acceptance partly because most economists seem to feel deeply in their guts — and their training predisposes them to do so — that the economy must have a unique equilibrium to which market forces inevitably propel it, even if the approach is sometimes, as some believe, painfully slow.

The current behavior of economies with successful inflation targeting is inconsistent with the existence of a unique non-accelerating-inflation rate of unemployment (NAIRU) but is consistent with evolutionary theory in which the economy is constantly evolving in the face of path-dependent, endogenously generated, technological change, and has a wide range of unemployment and GDP over which the inflation rate is stable. This view explains what otherwise seems mysterious in the recent experience of many economies and makes the early post-WWII debates not seem as silly as they appeared to be when economists came to accept the assumption of a perfectly inelastic, long-run Phillips curve located at the unique equilibrium level of unemployment. One thing that stands in the way of accepting this view, however, is the tyranny of the generally accepted assumption of a unique, self-sustaining macroeconomic equilibrium.

This paper covers some of the key events in the theory concerning, and the experience of, the economy’s behavior with respect to inflation and unemployment over the post-WWII period. The stage is set by the pressure-of-demand debate in the 1950s and the place that the simple Phillips curve came to play in it. The action begins with the introduction of the expectations-augmented Phillips curve and the acceptance by most Keynesians of its implication of a unique, self-sustaining macro equilibrium. This view seemed not inconsistent with the facts of inflation and unemployment until the mid-1990s, when the successful adoption of inflation targeting made it inconsistent with the facts. An alternative view is proposed, one that is capable of explaining current macro behavior and reinstates the relevance of the early pressure-of-demand debate. (pp. 415-16).

In reviewing the evidence that stable inflation is consistent with a range of unemployment rates, Lipsey generalizes the concept of a unique NAIRU to a non-accelerating-inflation band of unemployment (NAIBU) within which multiple rates of unemployment are consistent with a basically stable expected rate of inflation. In an interesting footnote, Lipsey addresses a possible argument against the relevance of the empirical evidence for policy makers based on the Lucas critique.

Some might raise the Lucas critique here, arguing that one finds the NAIBU in the data because policymakers are credibly concerned only with inflation. As soon as policymakers made use of the NAIBU, the whole unemployment-inflation relation that has been seen since the mid-1990s might change or break. For example, unions, particularly in the European Union, where they are typically more powerful than in North America, might alter their behavior once they became aware that the central bank was actually targeting employment levels directly and appeared to have the power to do so. If so, the Bank would have to establish that its priorities were lexicographically ordered with control of inflation paramount so that any level-of-activity target would be quickly dropped whenever inflation threatened to go outside of the target bands. (pp. 426-27)

I would just mention in this context that in this 2013 post about the Lucas critique, I pointed out that in the paper in which Lucas articulated his critique, he assumed that the only possible source of disequilibrium was a mistake in expected inflation. If everything else is working well, causing inflation expectations to be incorrect will make things worse. But if there are other sources of disequilibrium, it is not clear that incorrect inflation expectations will make things worse; they could make things better. That is a point that Lipsey and Kelvin Lancaster taught the profession in a classic article “The General Theory of Second Best,” 20 years before Lucas published his critique of econometric policy evaluation.

I conclude by quoting Lipsey’s penultimate paragraph (the final paragraph being a quote from Lipsey’s paper on the Phillips Curve from the Blaug and Lloyd volume Famous Figures and Diagrams in Economics, which I quoted in full in my 2013 post).

So we seem to have gone full circle from the early Keynesian view in which there was no unique level of GDP to which the economy was inevitably drawn, through a simple Phillips curve with its implied trade-off, to an expectations-augmented Phillips curve (or any of its more modern equivalents) with its associated unique level of GDP, and finally back to the early Keynesian view in which policymakers had an option as to the average pressure of aggregate demand at which economic activity could be sustained. However, the modern debate about whether to aim for [the high or low range of stable unemployment rates] is not a debate about inflation versus growth, as it was in the 1950s, but between those who would risk an occasional rise of inflation above the target band as the price of getting unemployment as low as possible and those who would risk letting unemployment fall below that indicated by the lower boundary of the NAIBU as the price of never risking an acceleration of inflation above the target rate. (p. 427)

What’s Wrong with Monetarism?

UPDATE: (05/06): In an email Richard Lipsey has chided me for seeming to endorse the notion that 1970s stagflation refuted Keynesian economics. Lipsey rightly points out that by introducing inflation expectations into the Phillips Curve or the Aggregate Supply Curve, a standard Keynesian model is perfectly capable of explaining stagflation, so that it is simply wrong to suggest that 1970s stagflation constituted an empirical refutation of Keynesian theory. So my statement in the penultimate paragraph that the k-percent rule

was empirically demolished in the 1980s in a failure even more embarrassing than the stagflation failure of Keynesian economics.

should be amended to read “the supposed stagflation failure of Keynesian economics.”

Brad DeLong recently did a post (“The Disappearance of Monetarism”) referencing an old (apparently unpublished) paper of his following up his 2000 article (“The Triumph of Monetarism”) in the Journal of Economic Perspectives. Paul Krugman added his own gloss on DeLong on Friedman in a post called “Why Monetarism Failed.” In the JEP paper, DeLong argued that the New Keynesian policy consensus of the 1990s was built on the foundation of what DeLong called “classic monetarism,” the analytical core of the doctrine developed by Friedman in the 1950s and 1960s, a core that survived the demise of what he called “political monetarism,” the set of factual assumptions and policy preferences required to justify Friedman’s k-percent rule as the holy grail of monetary policy.

In his follow-up paper, DeLong balanced his enthusiasm for Friedman with a bow toward Keynes, noting the influence of Keynes on both classic and political monetarism, arguing that, unlike earlier adherents of the quantity theory, Friedman believed that a passive monetary policy was not the appropriate policy stance during the Great Depression; Friedman famously held the Fed responsible for the depth and duration of what he called the Great Contraction, because it had allowed the US money supply to drop by a third between 1929 and 1933. This was in sharp contrast to hard-core laissez-faire opponents of Fed policy, who regarded even the mild and largely ineffectual steps taken by the Fed – increasing the monetary base by 15% – as illegitimate interventionism to obstruct the salutary liquidation of bad investments, thereby postponing the necessary reallocation of real resources to more valuable uses. So, according to DeLong, Friedman, no less than Keynes, was battling against the hard-core laissez-faire opponents of any positive action to speed recovery from the Depression. While Keynes believed that in a deep depression only fiscal policy would be effective, Friedman believed that, even in a deep depression, monetary policy would be effective. But both agreed that there was no structural reason why stimulus would necessarily be counterproductive; both rejected the idea that only if the increased output generated during the recovery was of a particular composition would recovery be sustainable.

Indeed, that’s why Friedman has always been regarded with suspicion by laissez-faire dogmatists who correctly judged him to be soft in his criticism of Keynesian doctrines, never having disputed the possibility that “artificially” increasing demand – either by government spending or by money creation — in a deep depression could lead to sustainable economic growth. From the point of view of laissez-faire dogmatists that concession to Keynesianism constituted a total sellout of fundamental free-market principles.

Friedman parried such attacks on the purity of his free-market dogmatism with a counterattack against his free-market dogmatist opponents, arguing that the gold standard to which they were attached so fervently was itself inconsistent with free-market principles, because, in virtually all historical instances of the gold standard, the monetary authorities charged with overseeing or administering the gold standard retained discretionary authority allowing them to set interest rates and exercise control over the quantity of money. Because monetary authorities retained substantial discretionary latitude under the gold standard, Friedman argued that a gold standard was institutionally inadequate and incapable of constraining the behavior of the monetary authorities responsible for its operation.

The point of a gold standard, in Friedman’s view, was that it makes it costly to increase the quantity of money. That might once have been true, but advances in banking technology eventually made it easy for banks to increase the quantity of money without any increase in the quantity of gold, making inflation possible even under a gold standard. True, eventually the inflation would have to be reversed to maintain the gold standard, but that simply made alternating periods of boom and bust inevitable. Thus, the gold standard, i.e., a mere obligation to convert banknotes or deposits into gold, was an inadequate constraint on the quantity of money, and an inadequate systemic assurance of stability.

In other words, if the point of a gold standard is to prevent the quantity of money from growing excessively, then why not just eliminate the middleman and simply establish a monetary rule constraining the growth in the quantity of money? That was why Friedman believed that his k-percent rule – please pardon the expression – trumped the gold standard, accomplishing directly what the gold standard could not accomplish, even indirectly: a gradual steady increase in the quantity of money that would prevent monetary-induced booms and busts.

Moreover, the k-percent rule made the monetary authority responsible for one thing, and one thing alone, imposing on it a rule prescribing the time path of a single targeted instrument over which the monetary authority supposedly has direct control: the quantity of money. The belief that the monetary authority in a modern banking system has direct control over the quantity of money was, of course, an obvious mistake. That the mistake could have persisted as long as it did was the result of the analytical distraction of the money multiplier: one of the leading fallacies of twentieth-century monetary thought, a fallacy that introductory textbooks unfortunately continue even now to foist upon unsuspecting students.

The money multiplier is not a structural supply-side variable; it is a reduced-form variable incorporating both supply-side and demand-side parameters. But Friedman and other Monetarists insisted on treating it as if it were a structural supply-side variable – and a deep structural variable at that – so that it is no less vulnerable to the Lucas Critique than, say, the Phillips Curve. Nevertheless, for at least a decade and a half after his refutation of the structural Phillips Curve, demonstrating its dangers as a guide to policy making, Friedman continued treating the money multiplier as if it were a deep structural variable, leading to the Monetarist forecasting debacle of the 1980s, when Friedman and his acolytes were confidently predicting – over and over again – the return of double-digit inflation because the quantity of money was increasing for most of the 1980s at double-digit rates.
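The point can be illustrated with a back-of-the-envelope calculation. The functional form below is the standard textbook multiplier; the numerical values are purely illustrative, not estimates:

```python
def money_multiplier(c, r):
    """Reduced-form money multiplier m = (1 + c) / (c + r).

    c: the public's currency/deposit ratio -- a demand-side choice
       made by households and firms, not by the central bank.
    r: banks' reserve/deposit ratio -- itself partly a behavioral
       (portfolio) choice by banks, not a fixed technical datum.
    """
    return (1 + c) / (c + r)

base = 100.0  # monetary base (illustrative units)

# Same base, but the public shifts toward holding currency,
# roughly what happened in 1930-33: the money stock contracts
# sharply with no change in the base at all.
print(base * money_multiplier(0.1, 0.1))  # roughly 550
print(base * money_multiplier(0.4, 0.1))  # roughly 280
```

Because c (and, in part, r) reflects the behavior of money holders and banks, the multiplier shifts whenever that behavior shifts, which is exactly why treating it as a fixed structural parameter invites the Lucas Critique.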

So once the k-percent rule collapsed under an avalanche of contradictory evidence, the Monetarist alternative to the gold standard that Friedman had persuasively, though fallaciously, argued was, on strictly libertarian grounds, preferable to the gold standard, the gold standard once again became the default position of laissez-faire dogmatists. There was to be sure some consideration given to free banking as an alternative to the gold standard. In his old age, after winning the Nobel Prize, F. A. Hayek introduced a proposal for direct currency competition — the elimination of legal tender laws and the like – which he later developed into a proposal for the denationalization of money. Hayek’s proposals suggested that convertibility into a real commodity was not necessary for a non-legal tender currency to have value – a proposition which I have argued is fallacious. So Hayek can be regarded as the grandfather of crypto currencies like the bitcoin. On the other hand, advocates of free banking, with a few exceptions like Earl Thompson and me, have generally gravitated back to the gold standard.

So while I agree with DeLong and Krugman (and for that matter with Friedman’s many laissez-faire dogmatist critics) that Friedman had Keynesian inclinations which, depending on his audience, he sometimes emphasized, and sometimes suppressed, the most important reason that he was unable to retain his hold on right-wing monetary-economics thinking is that his key monetary-policy proposal – the k-percent rule – was empirically demolished in a failure even more embarrassing than the stagflation failure of Keynesian economics. With the k-percent rule no longer available as an alternative, what’s a right-wing ideologue to do?

Anyone for nominal gross domestic product level targeting (or NGDPLT for short)?

Making Sense of the Phillips Curve

In a comment on my previous post about the supposedly vertical long-run Phillips Curve, Richard Lipsey mentioned a paper he presented a couple of years ago at the History of Economics Society Meeting: “The Phillips Curve and the Tyranny of an Assumed Unique Macro Equilibrium.” In a subsequent comment, Richard also posted the abstract to his paper. The paper provides a succinct yet fascinating overview of the evolution of macroeconomists’ interpretations of the Phillips curve since Phillips published his paper almost 60 years ago.

The two key points that I take away from Richard’s discussion are the following. 1) A key microeconomic assumption underlying the Keynesian model is that over a broad range of outputs, most firms are operating under conditions of constant short-run marginal cost, because in the short run firms keep the capital-labor ratio fixed, varying their usage of capital along with the amount of labor utilized. With a fixed capital-labor ratio, marginal cost is flat. In the usual textbook version, the short-run marginal cost is rising because of a declining capital-labor ratio, requiring an increasing number of workers to wring out successive equal increments of output from a fixed amount of capital. Given flat marginal cost, firms respond to changes in demand by varying output but not price until they hit a capacity bottleneck.
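Lipsey’s first point can be put in one line of standard cost theory (my notation, not Lipsey’s):

```latex
MC \;=\; \frac{w}{MP_L}, \qquad MP_L \;=\; g\!\left(\frac{K}{L}\right)
```

With the capital-labor ratio K/L held fixed as output varies, the marginal product of labor is constant, so for a given wage w marginal cost is flat. The familiar rising textbook MC curve comes instead from holding K fixed, so that K/L falls and the marginal product of labor declines as output expands.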

The second point, a straightforward implication of the first, is that there are multiple equilibria for such an economy, each equilibrium corresponding to a different level of total demand, with a price level more or less determined by costs, at any rate until total output approaches the limits of its capacity.

Thus, early on, the Phillips Curve was thought to be relatively flat, with little effect on inflation unless unemployment was forced down below some very low level. The key question was how far unemployment could be pushed down before significant inflationary pressure would begin to emerge. Doctrinaire Keynesians advocated driving unemployment down as low as possible, while skeptics argued that significant inflationary pressure would begin to emerge even at higher rates of unemployment, so that a prudent policy would be to operate at a level of unemployment sufficiently high to keep inflationary pressures in check.

Lipsey allows that, in the 1960s, the view that the Phillips Curve presented a menu of alternative combinations of unemployment and inflation from which policymakers could choose did take hold, acknowledging that he himself expressed such a view in a 1965 paper (“Structural and Deficient Demand Unemployment Reconsidered” in Employment Policy and the Labor Market edited by Arthur Ross), “inflationary points on the Phillips Curve represent[ing] disequilibrium points that had to be maintained by monetary policy that perpetuated the disequilibrium by suitable increases in the rate of monetary expansion.” It was this version of the Phillips Curve that was effectively attacked by Friedman and Phelps, who replaced it with a version in which the equilibrium rate of unemployment is uniquely determined by real factors, the natural rate of unemployment, any deviation from the natural rate resulting in a series of adjustments in inflation and expected inflation that would restore the natural rate of unemployment.
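The Friedman-Phelps accelerationist logic that displaced that trade-off version can be written compactly. This is a standard textbook rendering with adaptive expectations, not Lipsey’s own notation:

```latex
\pi_t \;=\; \pi_t^{e} - a\,(u_t - u^{*}), \quad a > 0, \qquad
\pi_t^{e} \;=\; \pi_{t-1}
\;\;\Longrightarrow\;\;
\pi_t - \pi_{t-1} \;=\; -a\,(u_t - u^{*}).
```

Holding unemployment below the natural rate u* therefore raises inflation period after period; only u = u* is consistent with a constant rate of inflation. That is the unique-equilibrium conclusion whose generality Lipsey disputes.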

Sometime in the 1960s the Phillips curve came to be thought of as providing a stable trade-off between inflation and unemployment. When Lipsey did adopt this trade-off version, as for example Lipsey (1965), inflationary points on the Phillips curve represented disequilibrium points that had to be maintained by monetary policy that perpetuated the disequilibrium by suitable increases in the rate of monetary expansion. In the new Classical interpretation that began with Edmund Phelps (1967), Milton Friedman (1968) and Lucas and Rapping (1969), each point was an equilibrium point because demands and supplies of agents were shifted from their full-information locations when they misinterpreted the price signals. There was, however, only one full-information equilibrium of income, Y*, and unemployment, U*.

The Friedman-Phelps argument was made as inflation rose significantly in the late 1960s, and the mild 1969-70 recession reduced inflation by only a smidgen, setting the stage for Nixon’s imposition of his disastrous wage and price controls in 1971, combined with a loosening of monetary policy by a compliant Arthur Burns as part of Nixon’s 1972 reelection strategy. When the hangover to the 1972 monetary binge was combined with a quadrupling of oil prices by OPEC in late 1973, the result was a simultaneous increase in inflation and unemployment – stagflation — a combination widely perceived as a decisive refutation of Keynesian theory. To cope with that theoretical conundrum, the Keynesian model was expanded to incorporate the determination of the price level by deriving an aggregate supply and aggregate demand curve in price-level/output space.

Lipsey acknowledges a crucial misstep in constructing the Aggregate Demand/Aggregate Supply framework: assuming a unique macroeconomic equilibrium, an assumption that implied the existence of a unique natural rate of unemployment. Keynesians won the battle, providing a perfectly respectable theoretical explanation for stagflation, but, in doing so, they lost the war to Friedman, paving the way for the malign ascendancy of New Classical economics, with which New Keynesian economics became an effective collaborator. Whether the collaboration was willing or unwilling is unclear and unimportant; by assuming a unique equilibrium, New Keynesians gave up the game.

I was so intent in showing that this AD-AS construction provided a simple Keynesian explanation of stagflation, contrary to the accusation of the New Classical economists that stagflation provided a conclusive refutation of Keynesian economics that I paid too little attention to the enormous importance of the new assumption introduced into Keynesian models. The addition of an expectations-augmented Phillips curve, negatively sloped in the short run but vertical in the long run, produced a unique macro equilibrium that would be reached whatever macroeconomic policy was adopted.

Lipsey does not want to go back to the old Keynesian paradigm; he prefers a third approach that can be traced back to, among others, Joseph Schumpeter in which the economy is viewed “as constantly evolving under the impact of endogenously generated technological change.” Such technological change can be vaguely foreseen, but also gives rise to genuine surprises. The course of economic development is not predetermined, but path-dependent. History matters.

I suggest that the explanation of the current behaviour of inflation, output and unemployment in modern industrial economies is provided not by any EWD [equilibrium with deviations] theory but by evolutionary theories. These build on the obvious observation that technological change is continual in modern economies (decade by decade at least since 1760), but uneven (tending to come in spurts), and path dependent (because, among other reasons, knowledge is cumulative with one advance enabling another). These changes are generated endogenously by private-sector, profit-seeking agents competing in terms of new products, new processes and new forms of organisation, and by public sector activities in such places as universities and government research laboratories. They continually alter the structure of the economy, causing waves of serially correlated investment expenditure that are a major cause of cycles, as well as driving the long-term growth that continually transforms our economic, social and political structures. In their important book As Time Goes By, Freeman and Louça (2001) trace these processes as they have operated since the beginnings of the First Industrial Revolution.

A critical distinction in all such theories is between risk, which is easily handled in neoclassical economics, and uncertainty, which is largely ignored in it except to pay it lip service. In risky situations, agents with the same objective function and identical knowledge will choose the same alternative: the one that maximizes the expected value of their profits or utility. This gives rise to unique predictable behaviour of agents acting under specified conditions. In contrast in uncertain situations, two identically situated and motivated agents can, and observably do, choose different alternatives — as for example when different firms all looking for the same technological breakthrough choose different lines of R&D — and there is no way to tell in advance of knowing the results which is the better choice. Importantly, agents typically make R&D decisions under conditions of genuine uncertainty. No one knows if a direction of technological investigation will go up a blind alley or open onto a rich field of applications until funds are spent investigating the route. Sometimes trivial expenses produce results of great value while major expenses produce nothing of value. Since there is no way to decide in advance which of two alternative actions with respect to invention or innovation is the best one until the results are known, there is no unique line of behaviour that maximises agents’ expected profits. Thus agents are better understood as groping into an uncertain future in a purposeful, profit- or utility-seeking manner, rather than as maximizing their profits or utility.

This is certainly the right way to think about how economies evolve over time, but I would just add that even if one stays within the more restricted framework of Walrasian general equilibrium, there is simply no persuasive theoretical reason to assume that there is a unique equilibrium or that an economy will necessarily arrive at that equilibrium no matter how long we wait. I have discussed this point several times before, most recently here. The assumption that there is a natural rate of unemployment “ground out,” as Milton Friedman put it so awkwardly, “by the Walrasian system of general equilibrium equations” simply lacks any theoretical foundation. Even in a static model in which knowledge and technology were not evolving, the natural rate of unemployment is a will-o’-the-wisp.

Because there is no unique static equilibrium in the evolutionary world in which history matters, no adjustment mechanism is required to maintain it. Instead, the constantly changing economy can exist over a wide range of income, employment and unemployment values, without behaving as it would if its inflation rate were determined by an expectations-augmented Phillips curve or any similar construct centred on unique general equilibrium values of Y and U. Thus there is no stable long-run vertical Phillips curve or aggregate supply curve.

Instead of the Phillips curve there is a band as shown in Figure 4 [See below]. Its midpoint is at the expected rate of inflation. If the central bank has a credible inflation target that it sticks to, the expected rate will be that target rate, shown as πe in the figure. The actual rate will vary around the expected rate depending on a number of influences such as changes in productivity, the price of oil and food, but not significantly on variations in U or Y. At either end of this band, there may be something closer to a conventional Phillips curve with prices and wages falling in the face of a major depression and rising in the face of a major boom financed by monetary expansion. Also, the whole band will be shifted by anything that changes the expected rate of inflation.

[Figure 4 (phillips_lipsey): the Phillips curve band]
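The band Lipsey describes can be written down compactly. The notation below is my own, purely illustrative, and not taken from Lipsey's paper:

```latex
% Within the non-inflationary band of unemployment (NAIBU),
% inflation fluctuates around the expected (target) rate
% regardless of where U sits inside the band:
\pi_t = \pi^{e} + \varepsilon_t
\quad \text{for } U_t \in [U_L,\, U_H],
% while outside the band inflation accelerates or decelerates:
\frac{d\pi_t}{dt} > 0 \;\text{ if } U_t < U_L,
\qquad
\frac{d\pi_t}{dt} < 0 \;\text{ if } U_t > U_H .
```

Here \(\varepsilon_t\) stands in for the influences the passage mentions — changes in productivity and in oil and food prices — none of which depend systematically on variations in U or Y inside the band.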

Lipsey concludes as follows:

So we seem to have gone full circle from the early Keynesian view in which there was no unique level of income to which the economy was inevitably drawn, through a simple Phillips curve with its implied trade off, to an expectations-augmented Phillips curve (or any of its more modern equivalents) with its associated unique level of national income, and finally back to the early non-unique Keynesian view in which policy makers had an option as to the average pressure of aggregate demand at which the economy could be operated.

“Perhaps [then] Keynesians were too hasty in following the New Classical economists in accepting the view that follows from static [and all EWD] models that stable rates of wage and price inflation are poised on the razor’s edge of a unique NAIRU and its accompanying Y*. The alternative does not require a long term Phillips curve trade off, nor does it deny the possibility of accelerating inflations of the kind that have bedevilled many third world countries. It merely states that industrialised economies with low expected inflation rates may be less precisely responsive than current theory assumes because they are subject to many lags and inertias, and are operating in an ever-changing and uncertain world of endogenous technological change, which has no unique long term static equilibrium. If so, the economy may not be similar to the smoothly functioning mechanical world of Newtonian mechanics but rather to the imperfectly evolving world of evolutionary biology. The Phillips relation then changes from being a precise curve to being a band within which various combinations of inflation and unemployment are possible but outside of which inflation tends to accelerate or decelerate. Perhaps then the great [pre-Phillips curve] debates of the 1940s and early 1950s that assumed that there was a range within which the economy could be run with varying pressures of demand, and varying amounts of unemployment and inflation[ary pressure], were not as silly as they were made to seem when both Keynesian and New Classical economists accepted the assumption of a perfectly inelastic, one-dimensional, long run Phillips curve located at a unique equilibrium Y* and NAIRU.” (Lipsey, “The Phillips Curve,” in Famous Figures and Diagrams in Economics, edited by Mark Blaug and Peter Lloyd, p. 389)

The Near Irrelevance of the Vertical Long-Run Phillips Curve

From a discussion about how much credit Milton Friedman deserves for changing the way that economists think about inflation, I want to nudge the conversation in a slightly different direction and restate a point that I made some time ago in one of my favorite posts (The Lucas Critique Revisited). But if Friedman taught us anything, it is that incessant repetition of the same already obvious point can do wonders for your reputation. That’s one lesson from Milton that I am willing to take to heart, though my tolerance for hearing myself say the same darn thing over and over again is probably not as great as Friedman’s was, which, to be sure, is not the only way in which I fall short of him by comparison. (I am almost a foot taller than he was, by the way.) Speaking of being a foot taller than Friedman, I don’t usually post pictures on this blog, but here is one that I have always found rather touching. And if you don’t know who the other guy in the picture is, you have no right to call yourself an economist.

friedman_&_Stigler

At any rate, the expectations-augmented, long-run Phillips Curve, as we all know, was shown by Friedman to be vertical. But what exactly does it mean for the expectations-augmented, long-run Phillips Curve to be vertical? Discussions about whether the evidence supports the proposition that the expectations-augmented, long-run Phillips Curve is vertical (including some of the comments on my recent posts) suggest that people are not clear on what “long-run” means in the context of the expectations-augmented Phillips Curve and have not really thought carefully about what empirical content is contained in the proposition that the expectations-augmented, long-run Phillips Curve is vertical.

Just to frame the discussion of the Phillips Curve, let’s talk about what the term “long-run” means in economics. What it certainly does not mean is an amount of calendar time, though I won’t deny that there are frequent attempts to correlate long-run with varying durations of calendar time. But all such attempts either completely misunderstand what the long-run actually represents, or they merely aim to provide the untutored with some illusion of concreteness in what is otherwise a completely abstract discussion. In fact, what “long run” connotes is simply a full transition from one equilibrium state to another in the context of a comparative-statics exercise.

If a change in some exogenous parameter is imposed on a pre-existing equilibrium, then the long-run represents the full transition to a new equilibrium in which all endogenous variables have fully adjusted to the parameter change. The short-run, then, refers to some intermediate adjustment to the parameter change in which some endogenous variables have been arbitrarily held fixed (presumably because of some possibly reasonable assumption that some variables are able to adjust more speedily than other variables to the posited parameter change).

Now the Phillips Curve that was discovered by A. W. Phillips in his original paper was a strictly empirical relation between observed (wage) inflation and observed unemployment. But the expectations-augmented, long-run Phillips Curve is a theoretical construct. And what it represents is certainly not an observable relationship between inflation and unemployment; it is rather a locus of equilibrium points, each point representing full adjustment of the labor market to a particular rate of inflation, where full adjustment means that the rate of inflation is fully anticipated by all economic agents in the model. So what the expectations-augmented, long-run Phillips Curve is telling us is the outcome of a series of comparative-statics exercises. Starting from full equilibrium with a given rate of inflation fully expected, we impose a parameter change in which the exogenously imposed rate of inflation is altered, and we deduce a new equilibrium in which the fully and universally expected rate of inflation equals the new inflation parameter. The claim of verticality is that the equilibrium rate of unemployment corresponding to the new inflation parameter will not differ from the equilibrium rate of unemployment corresponding to the original inflation parameter.
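For concreteness, the standard textbook formalization of the construct (not a formula appearing in the post, and my choice of symbols) makes the point compact:

```latex
% Expectations-augmented Phillips Curve:
\pi = \pi^{e} - \alpha\,(U - U^{*}), \qquad \alpha > 0 .
% The long run is defined by fully anticipated inflation, \pi = \pi^{e}.
% Imposing that condition on the equation above gives
U = U^{*} \quad \text{for every value of } \pi ,
% i.e., the locus of long-run equilibria is vertical at U = U^{*}.
```

Each value of \(\pi\) picked for the comparative-statics exercise yields the same equilibrium unemployment rate \(U^{*}\), which is all that the verticality proposition asserts.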

Notice, as well, that the expectations-augmented, long-run Phillips Curve is not saying that imposing a new rate of inflation on an actual economic system would lead to a new equilibrium in which there was no change in unemployment; it is merely comparing alternative equilibria of the same system with different exogenously imposed rates of inflation. To make a statement about the effect of a change in the rate of inflation on unemployment, one has to be able to specify an adjustment path in moving from one equilibrium to another. The comparative-statics method says nothing about the adjustment path; it simply compares two alternative equilibrium states and specifies the change in the endogenous variables induced by the change in an exogenous parameter.

So the vertical shape of the expectations-augmented, long-run Phillips Curve tells us very little about how, in any given situation, a change in the rate of inflation would actually affect the rate of unemployment. Because the underlying comparative-statics exercise cannot specify the adjustment path taken by a system once it departs from its original equilibrium state, the expectations-augmented, long-run Phillips Curve fails to tell us how a real system starting from equilibrium would be affected by a change in the rate of inflation. It is even less equipped to tell us about the adjustment to a change in the rate of inflation when a system is not in equilibrium to begin with.

The entire discourse of the expectations-augmented, long-run Phillips Curve is completely divorced from the kinds of questions that policy makers in the real world usually have to struggle with – questions like whether increasing the rate of inflation in an economy with abnormally high unemployment would facilitate or obstruct the adjustment process that takes the economy back to a more normal unemployment rate. The expectations-augmented, long-run Phillips Curve may not be completely irrelevant to the making of economic policy – it is good to know, for example, that if we are trying to figure out which time path of NGDP to aim for, there is no particular reason to think that a time path with a 10% rate of growth of NGDP would generate a significantly lower rate of unemployment than a time path with a 5% rate of growth – but its relationship to reality is sufficiently tenuous that it is irrelevant to any discussion of policy alternatives for economies unless those economies are already close to being in equilibrium.

Did David Hume Discover the Vertical Phillips Curve?

In my previous post about Nick Rowe and Milton Friedman, I pointed out to Nick Rowe that Friedman (and Phelps) did not discover the argument that the long-run Phillips Curve, defined so that every rate of inflation is correctly expected, is vertical. The argument I suggested can be traced back at least to Hume. My claim on Hume’s behalf was based on my vague recollection that Hume distinguished between the effect of a high price level and a rising price level, a high price level having no effect on output and employment, while a rising price level increases output and employment.

Scott Sumner offered the following comment, leaving it as an exercise for the reader to figure out what he meant by “didn’t quite get there”:

As you know Friedman is one of the few areas where we disagree. Here I’ll just address one point, the expectations augmented Phillips Curve. Although I love Hume, he didn’t quite get there, although he did discuss the simple Phillips Curve.

I wrote the following response to Scott referring to the quote that I was thinking of without quoting it verbatim (because I couldn’t remember where to find it):

There is a wonderful quote by Hume about how low prices or high prices are irrelevant to total output, profits and employment, but that unexpected increases in prices are a stimulus to profits, output, and employment. I’ll look for it, and post it.

Nick Rowe then obligingly provided the quotation I was thinking of (but not all of it):

Here, to my mind, is the “money quote” (pun not originally intended) from David Hume’s “Of Money”:

“From the whole of this reasoning we may conclude, that it is of no manner of consequence, with regard to the domestic happiness of a state, whether money be in a greater or less quantity. The good policy of the magistrate consists only in keeping it, if possible, still encreasing; because, by that means, he keeps alive a spirit of industry in the nation, and encreases the stock of labour, in which consists all real power and riches.”

The first sentence is fine. But the second sentence is very clearly a problem.

Was it Friedman who said “we have only advanced one derivative since Hume”?

OK, so let’s see the whole relevant quotation from Hume’s essay “Of Money.”

Accordingly we find, that, in every kingdom, into which money begins to flow in greater abundance than formerly, everything takes a new face: labour and industry gain life; the merchant becomes more enterprising, the manufacturer more diligent and skilful, and even the farmer follows his plough with greater alacrity and attention. This is not easily to be accounted for, if we consider only the influence which a greater abundance of coin has in the kingdom itself, by heightening the price of Commodities, and obliging everyone to pay a greater number of these little yellow or white pieces for everything he purchases. And as to foreign trade, it appears, that great plenty of money is rather disadvantageous, by raising the price of every kind of labour.

To account, then, for this phenomenon, we must consider, that though the high price of commodities be a necessary consequence of the encrease of gold and silver, yet it follows not immediately upon that encrease; but some time is required before the money circulates through the whole state, and makes its effect be felt on all ranks of people. At first, no alteration is perceived; by degrees the price rises, first of one commodity, then of another; till the whole at last reaches a just proportion with the new quantity of specie which is in the kingdom. In my opinion, it is only in this interval or intermediate situation, between the acquisition of money and rise of prices, that the encreasing quantity of gold and silver is favourable to industry. When any quantity of money is imported into a nation, it is not at first dispersed into many hands; but is confined to the coffers of a few persons, who immediately seek to employ it to advantage. Here are a set of manufacturers or merchants, we shall suppose, who have received returns of gold and silver for goods which they sent to CADIZ. They are thereby enabled to employ more workmen than formerly, who never dream of demanding higher wages, but are glad of employment from such good paymasters. If workmen become scarce, the manufacturer gives higher wages, but at first requires an encrease of labour; and this is willingly submitted to by the artisan, who can now eat and drink better, to compensate his additional toil and fatigue.

He carries his money to market, where he finds everything at the same price as formerly, but returns with greater quantity and of better kinds, for the use of his family. The farmer and gardener, finding, that all their commodities are taken off, apply themselves with alacrity to the raising more; and at the same time can afford to take better and more cloths from their tradesmen, whose price is the same as formerly, and their industry only whetted by so much new gain. It is easy to trace the money in its progress through the whole commonwealth; where we shall find, that it must first quicken the diligence of every individual, before it encrease the price of labour. And that the specie may encrease to a considerable pitch, before it have this latter effect, appears, amongst other instances, from the frequent operations of the FRENCH king on the money; where it was always found, that the augmenting of the numerary value did not produce a proportional rise of the prices, at least for some time. In the last year of LOUIS XIV, money was raised three-sevenths, but prices augmented only one. Corn in FRANCE is now sold at the same price, or for the same number of livres, it was in 1683; though silver was then at 30 livres the mark, and is now at 50. Not to mention the great addition of gold and silver, which may have come into that kingdom since the former period.

From the whole of this reasoning we may conclude, that it is of no manner of consequence, with regard to the domestic happiness of a state, whether money be in a greater or less quantity. The good policy of the magistrate consists only in keeping it, if possible, still encreasing; because, by that means, he keeps alive a spirit of industry in the nation, and encreases the stock of labour, in which consists all real power and riches. A nation, whose money decreases, is actually, at that time, weaker and more miserable than another nation, which possesses no more money, but is on the encreasing hand. This will be easily accounted for, if we consider, that the alterations in the quantity of money, either on one side or the other, are not immediately attended with proportionable alterations in the price of commodities. There is always an interval before matters be adjusted to their new situation; and this interval is as pernicious to industry, when gold and silver are diminishing, as it is advantageous when these metals are encreasing. The workman has not the same employment from the manufacturer and merchant; though he pays the same price for everything in the market. The farmer cannot dispose of his corn and cattle; though he must pay the same rent to his landlord. The poverty, and beggary, and sloth, which must ensue, are easily foreseen.

So Hume understands that once-and-for-all increases in the stock of money and in the price level are neutral, and also that in the transition from one price level to another, there will be a transitory effect on output and employment. However, when he says that the good policy of the magistrate consists only in keeping it, if possible, still increasing; because, by that means, he keeps alive a spirit of industry in the nation, he seems to be suggesting that the long-run Phillips Curve is actually positively sloped, thus confirming Milton Friedman (and Nick Rowe and Scott Sumner) in saying that Hume was off by one derivative.

While I think that is a fair reading of Hume, it is not the only one, because Hume really was thinking in terms of price levels, not rates of inflation. The idea that a good magistrate would keep the stock of money increasing could not have meant that the rate of inflation would indefinitely continue at a particular rate, only that the temporary increase in the price level would be extended a while longer. So I don’t think that Hume would ever have imagined that there could be a steady predicted rate of inflation lasting for an indefinite period of time. If he could have imagined a steady rate of inflation, I think he would have understood the simple argument that, once expected, the steady rate of inflation would not permanently increase output and employment.

At any rate, even if Hume did not explicitly anticipate Friedman’s argument for a vertical long-run Phillips Curve, certainly there were many economists before Friedman who did. I will quote just one example from a source (Hayek’s Constitution of Liberty) that predates Friedman by about eight years. There is every reason to think that Friedman was familiar with the source, Hayek having been Friedman’s colleague at the University of Chicago between 1950 and 1962. The following excerpt is from p. 331 of the 1960 edition.

Inflation at first merely produces conditions in which more people make profits and in which profits are generally larger than usual. Almost everything succeeds, there are hardly any failures. The fact that profits again and again prove to be greater than had been expected and that an unusual number of ventures turn out to be successful produces a general atmosphere favorable to risk-taking. Even those who would have been driven out of business without the windfalls caused by the unexpected general rise in prices are able to hold on and to keep their employees in the expectation that they will soon share in the general prosperity. This situation will last, however, only until people begin to expect prices to continue to rise at the same rate. Once they begin to count on prices being so many per cent higher in so many months’ time, they will bid up the prices of the factors of production which determine the costs to a level corresponding to the future prices they expect. If prices then rise no more than had been expected, profits will return to normal, and the proportion of those making a profit also will fall; and since, during the period of exceptionally large profits, many have held on who would otherwise have been forced to change the direction of their efforts, a higher proportion than usual will suffer losses.

The stimulating effect of inflation will thus operate only so long as it has not been foreseen; as soon as it comes to be foreseen, only its continuation at an increased rate will maintain the same degree of prosperity. If in such a situation prices rose less than expected, the effect would be the same as that of unforeseen deflation. Even if they rose only as much as was generally expected, this would no longer provide the expectational stimulus but would lay bare the whole backlog of adjustments that had been postponed while the temporary stimulus lasted. In order for inflation to retain its initial stimulating effect, it would have to continue at a rate always faster than expected.

This was certainly not the first time that Hayek made the same argument. See his Studies in Philosophy, Politics and Economics, pp. 295-96, for a 1958 version of the argument. Is there any part of Friedman’s argument in his 1968 essay (“The Role of Monetary Policy“) not contained in the quote from Hayek? Nor is there anything to indicate that Hayek thought he was making an argument that was not already familiar. The logic is so obvious that it is actually pointless to look for someone who “discovered” it. If Friedman somehow gets credit for making the discovery, it is simply because he was the one who made the argument at just the moment when the rest of the profession happened to be paying attention.

Richard Lipsey and the Phillips Curve

Richard Lipsey has had an extraordinarily long and productive career as both an economic theorist and an empirical economist, making numerous important contributions in almost all branches of economics. (See, for example, the citation about Lipsey as a fellow of the Canadian Economics Association.) In addition, his many textbooks have been enormously influential in advocating that economists should strive to make their discipline empirically relevant by actually subjecting their theories to meaningful empirical tests in which refutation is a realistic possibility, not just a sign that the researcher was insufficiently creative in theorizing or in performing the data analysis.

One of Lipsey’s most important early contributions was his 1960 paper on the Phillips Curve, “The Relationship between Unemployment and the Rate of Change of Money Wages in the United Kingdom 1862-1957: A Further Analysis,” in which he extended A. W. Phillips’s original results, and he has continued to write about the Phillips Curve ever since. Lipsey, in line with his empiricist philosophical position, has consistently argued that a well-supported empirical relationship should not be dismissed simply because of a purely theoretical argument about how expectations are formed. In other words, the argument that adjustments in inflation expectations would shift the short-run Phillips-curve relation captured by empirical estimates of the relationship between inflation and unemployment may well be valid in some general qualitative sense (as was actually recognized early on by Samuelson and Solow in their famous paper suggesting that the Phillips Curve could be interpreted as a menu of alternative combinations of inflation and unemployment from which policy-makers could choose). But that does not mean that it had to be accepted as an indisputable axiom of economics that the long-run relationship between unemployment and inflation is necessarily vertical, as Friedman and Phelps and Lucas convinced most of the economics profession in the late 1960s and early 1970s.

A few months ago, Lipsey was kind enough to send me a draft of the paper that he presented at the annual meeting of the History of Economics Society; the paper is called “The Phillips Curve and the Tyranny of an Assumed Unique Macro Equilibrium.” Here is the abstract of the paper.

To make the argument that the behaviour of modern industrial economies since the 1990s is inconsistent with theories in which there is a unique ergodic macro equilibrium, the paper starts by reviewing both the early Keynesian theory in which there was no unique level of income to which the economy was inevitably drawn and the debate about the amount of demand pressure at which it was best to maintain the economy: high aggregate demand and some inflationary pressure or lower aggregate demand and a stable price level. It then covers the rise of the simple Phillips curve and its expectations-augmented version, which introduced into current macro theory a natural rate of unemployment (and its associated equilibrium level of national income). This rate was also a NAIRU, the only rate consistent with stable inflation. It is then argued that the current behaviour of many modern economies in which there is a credible policy to maintain a low and steady inflation rate is inconsistent with the existence of either a unique natural rate or a NAIRU but is consistent with evolutionary theory in which there is perpetual change driven by endogenous technological advance. Instead of a NAIRU, evolutionary economies have a non-inflationary band of unemployment (a NAIBU) indicating a range of unemployment and income over which the inflation rate is stable. The paper concludes with the observation that the great pre-Phillips curve debates of the 1950s that assumed that there was a range within which the economy could be run with varying pressures of demand, and varying amounts of unemployment and inflationary pressure, were not as silly as they were made to seem when both Keynesian and New Classical economists accepted the assumption of a perfectly inelastic, long-run Phillips curve located at the unique equilibrium level of unemployment.

Back in January, I wrote a post about the Lucas Critique in which I pointed out that Lucas’s “proof” that the Phillips Curve is vertical in his celebrated paper on econometric policy evaluation was no proof at all, but simply a very special example in which the only disequilibrium permitted in the model – a misperception of the future price level – would lead an econometrician to estimate a negatively sloped relation between inflation and employment even though, under correct expectations of inflation, the relationship would be vertical. Allowing for a wider range of behavioral responses, I suggested, might well change the relation between inflation and output even under correctly expected inflation. In his new paper, Lipsey correctly points out that Friedman and Phelps and Lucas, and subsequent New Classical and New Keynesian theoreticians, who have embraced the vertical Phillips Curve doctrine as an article of faith, are also assuming, based on essentially no evidence, that there is a unique macro equilibrium. But there is very strong evidence to suggest that, in fact, any deviation from an initial equilibrium (or equilibrium time path) is likely to cause changes that, in and of themselves, alter conditions so as to propel the system toward a new and different equilibrium time path, rather than return it to the time path the system had been moving along before it was disturbed. (See my post of almost a year ago about a paper, “Does history matter?: Empirical analysis of evolutionary versus stationary equilibrium views of the economy,” by Carlaw and Lipsey.)

Lipsey concludes his paper with a quotation from his article “The Phillips Curve” published in the volume Famous Figures and Diagrams in Economics edited by Mark Blaug and Peter Lloyd.

“Perhaps [then] Keynesians were too hasty in following the New Classical economists in accepting the view that follows from static [and all EWD] models that stable rates of wage and price inflation are poised on the razor’s edge of a unique NAIRU and its accompanying Y*. The alternative does not require a long term Phillips curve trade off, nor does it deny the possibility of accelerating inflations of the kind that have bedevilled many third world countries. It merely states that industrialised economies with low expected inflation rates may be less precisely responsive than current theory assumes because they are subject to many lags and inertias, and are operating in an ever-changing and uncertain world of endogenous technological change, which has no unique long term static equilibrium. If so, the economy may not be similar to the smoothly functioning mechanical world of Newtonian mechanics but rather to the imperfectly evolving world of evolutionary biology. The Phillips relation then changes from being a precise curve to being a band within which various combinations of inflation and unemployment are possible but outside of which inflation tends to accelerate or decelerate. Perhaps then the great [pre-Phillips curve] debates of the 1940s and early 1950s that assumed that there was a range within which the economy could be run with varying pressures of demand, and varying amounts of unemployment and inflation[ary pressure], were not as silly as they were made to seem when both Keynesian and New Classical economists accepted the assumption of a perfectly inelastic, one-dimensional, long run Phillips curve located at a unique equilibrium Y* and NAIRU.”

Charles Goodhart on Nominal GDP Targeting

Charles Goodhart might just be the best all-around monetary economist in the world, having made impressive contributions to monetary theory and the history of monetary theory, to monetary history and the history of monetary institutions (especially central banking), and to the theory and, in his capacity as chief economist of the Bank of England, the practice of monetary policy. So whenever Goodhart offers his views on monetary policy, it is a good idea to pay close attention to what he says. But if there is anything to be learned from the history of economics (and I daresay the history of any scientific discipline), it is that nobody gets it right all the time. It’s nice to have a reputation, but sadly a reputation provides no protection from error.

In response to the recent buzz about targeting nominal GDP, Goodhart, emeritus professor at the London School of Economics and an adviser to Morgan Stanley, along with two Morgan Stanley economists, Jonathan Ashworth and Melanie Baker, just published a critique of a recent speech by Mark Carney, Governor-elect of the Bank of England, in which Carney seemed to endorse targeting the level of nominal GDP (hereinafter NGDPLT). (See also Marcus Nunes’s excellent post about Goodhart et al.) Goodhart et al. have two basic complaints about NGDPLT. The first is that the choice of an initial target level (i.e., do we think that current NGDP is now at its target or away from its target, and if so, by how much?) and of the prescribed growth in the target level over time would itself create destabilizing uncertainty in the process of changing to an NGDPLT monetary regime. The key distinction between a level target and a growth-rate target is that the former requires a subsequent compensatory adjustment for any deviation from the target while the latter requires no such adjustment for a deviation from the target. Because deviations will occur under any targeting regime, Goodhart et al. worry that the compensatory adjustments required by NGDPLT could trigger destabilizing gyrations in NGDP growth, especially if expectations, as they think likely, became unanchored.

This concern seems easily enough handled if the monetary authority is given, say, a 1-1.5% band around its actual target within which to operate. Inevitable variations around the target would not automatically require an immediate, rapid compensatory adjustment. As long as the monetary authority remained tolerably close to its target, it would not be compelled to make a sharp policy adjustment. A good driver does not always drive down the middle of his side of the road; he uses all the space available to avoid having to make abrupt changes in the direction in which the car is headed. The same principle would govern the decisions of a skillful monetary authority.
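The driving analogy can be sketched in a few lines. The 1.5% band width comes from the paragraph above; the function name and the simple close-the-whole-gap response are my own assumptions, purely illustrative of the logic rather than a description of any actual central bank rule:

```python
# Sketch of a level target with a tolerance band (illustrative only).
def policy_response(actual_ngdp, target_ngdp, band=0.015):
    """Return the proportional compensatory adjustment a level-targeting
    central bank would aim for, or 0.0 if NGDP is inside the band."""
    gap = (actual_ngdp - target_ngdp) / target_ngdp
    if abs(gap) <= band:
        return 0.0   # inside the band: no compensatory adjustment needed
    return -gap      # outside the band: aim to close the level gap

# A 1% overshoot stays inside the 1.5% band, so no abrupt correction.
print(policy_response(101.0, 100.0))                 # 0.0
# A 3% overshoot calls for a compensating 3% downward adjustment.
print(round(policy_response(103.0, 100.0), 3))       # -0.03
```

The point of the band is visible in the first call: small deviations trigger no response at all, which is exactly what defuses the worry about destabilizing gyrations.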

Another concern of Goodhart et al. is that the choice of the target growth rate of NGDP depends on how much real growth we think the economy is capable of. If real growth of 3% a year is possible, then the corresponding NGDP level target depends on how much inflation policy makers believe necessary to achieve that real GDP growth rate. If the “correct” rate of inflation is about 2%, then the targeted level of NGDP should grow at 5% a year. But Goodhart et al. are worried that achievable growth may be declining. If so, NGDPLT at 5% a year will imply more than 2% annual inflation.

Effectively, any overestimation of the sustainable real rate of growth, and such overestimation is all too likely, could force an MPC [monetary policy committee], subject to a level nominal GDP target, to soon have to aim for a significantly higher rate of inflation. Is that really what is now wanted? Bring back the stagflation of the 1970s; all is forgiven?

With all due respect, I find this concern greatly overblown. Even if the expectation of 3% real growth is wildly optimistic, say 2% too high, a 5% NGDP growth path would imply only 4% inflation. That might be too high a rate for Goodhart’s taste, or mine for that matter, but it would be a far cry from the 1970s, when inflation was often in the double-digits. Paul Volcker achieved legendary status in the annals of central banking by bringing the US rate of inflation down to 3.5 to 4%, so one needs to maintain some sense of proportion in these discussions.
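The arithmetic behind the two preceding paragraphs can be made explicit with a quick check. This is purely an illustrative calculation of my own, not anything in Goodhart et al.; the function name is invented for the sketch:

```python
def implied_inflation(ngdp_growth_pct, real_growth_pct):
    """Inflation implied by an NGDP growth path, given realized real growth.

    Uses the (approximate) decomposition of nominal growth:
    g_NGDP = g_real + inflation, all in percentage points per year.
    """
    return ngdp_growth_pct - real_growth_pct

# Baseline: a 5% NGDP path with 3% sustainable real growth implies 2% inflation.
print(implied_inflation(5, 3))  # 2

# The worry: if sustainable real growth was overestimated by two percentage
# points (1% rather than 3%), the same 5% path implies 4% inflation --
# above the 2% target, but nowhere near 1970s double digits.
print(implied_inflation(5, 1))  # 4
```

The point of the second line is the one made above: even a badly overestimated real growth rate turns a 5% NGDP path into 4% inflation, not stagflation-era double digits.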

Finally, Goodhart et al. invoke the Phillips Curve.

[A]n NGDP target would appear to run counter to the previously accepted tenets of monetary theory. Perhaps the main claim of monetary economics, as persistently argued by Friedman, and the main reason for having an independent Central Bank, is that over the medium and longer term monetary forces influence only monetary variables. Other real (e.g. supply-side) factors determine growth; the long-run Phillips curve is vertical. Do those advocating a nominal GDP target now deny that? Do they really believe that faster inflation now will generate a faster, sustainable, medium- and longer-term growth rate?

While it is certainly undeniable that Friedman showed, as, in truth, many others had before him, that, for an economy in approximate full-employment equilibrium, increased inflation cannot permanently reduce unemployment, it is far from obvious (to indulge in a bit of British understatement) that we are now in a state of full-employment equilibrium. If the economy is not now in full-employment equilibrium, then monetary-neutrality propositions, according to which money influences only monetary, but not real, variables in the medium and longer term, are of no relevance to policy. Those advocating a nominal GDP target need not deny that the long-run Phillips Curve is vertical, though, as I have argued previously (here, here, and here), the proposition that the long-run Phillips Curve is vertical is very far from being the natural law that Goodhart and many others seem to regard it as. And if Goodhart et al. believe that we are in fact in a state of full-employment equilibrium, then they ought to say so forthrightly, and they ought to make an argument to justify that far from obvious characterization of the current state of affairs.

Having said all that, I do have some sympathy with the following point made by Goodhart et al.

Given our uncertainty about sustainable growth, an NGDP target also has the obvious disadvantage that future certainty about inflation becomes much less than under an inflation (or price level) target. In order to estimate medium- and longer-term inflation rates, one has first to take some view about the likely sustainable trends in future real output. The latter is very difficult to do at the best of times, and the present is not the best of times. So shifting from an inflation to a nominal GDP growth target is likely to have the effect of raising uncertainty about future inflation and weakening the anchoring effect on expectations of the inflation target.

That is one reason why in my book Free Banking and Monetary Reform, I advocated Earl Thompson’s proposal for a labor standard aimed at stabilizing average wages (or, more precisely, average expected wages). But if you stabilize wages, and productivity is falling, then prices must rise. That’s just a matter of arithmetic. But there is no reason why the macroeconomically optimal rate of inflation should be invariant with respect to the rate of technological progress.
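The "matter of arithmetic" in the last paragraph follows from simple unit-labor-cost accounting: the price level moves with the ratio of wages to output per worker, so stable wages plus falling productivity mean rising prices. A minimal sketch with illustrative numbers of my own (not drawn from Thompson's proposal or my book):

```python
def price_level(avg_wage, labor_productivity):
    # Unit-labor-cost pricing: P = W / (output per worker), up to a
    # constant markup that is omitted here.
    return avg_wage / labor_productivity

# Hold the average wage fixed, as under a labor standard...
wage = 100.0
p0 = price_level(wage, labor_productivity=2.0)
# ...and let productivity fall by 5%:
p1 = price_level(wage, labor_productivity=1.9)

print(p1 > p0)  # True: with wages stable and productivity falling, prices rise
```

Running the comparison in reverse makes the complementary point: with wages stable and productivity rising, the same identity forces prices to fall, which is why the optimal inflation rate need not be invariant to the rate of technological progress.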

HT:  Bill Woolsey


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
