Posts Tagged 'Paul Krugman'

My Paper “The Fisher Effect and the Financial Crisis of 2008” Is Now Available

Back in 2009 or 2010, I became intrigued by what seemed to me to be a consistent tendency of the stock market to rise on news of monetary easing or other potentially inflationary news. I suspected that there might be such a correlation because of my work on the Great Depression inspired by Earl Thompson, from whom I first learned about a monetary theory of the Great Depression very different from Friedman’s monetary theory expounded in his Monetary History of the United States. Thompson’s theory focused on disturbances in the gold market associated with the demonetization of gold during World War I and the attempt to restore the gold standard in the 1920s, which, by increasing the world demand for gold, was the direct cause of the deflation that led to the Great Depression.

I later came to discover that Ralph Hawtrey had already propounded Thompson’s theory in the 1920s, almost a decade before the Great Depression started, and my friend and fellow student of Thompson, Ron Batchelder, made a similar discovery about Gustav Cassel. Our shared recognition that Thompson’s seemingly original theory of the Great Depression had been anticipated by Hawtrey and Cassel led us to collaborate on our paper about Hawtrey and Cassel. As I began to see parallels between the financial fragility of the 1920s and the financial fragility that followed the housing bubble, I began to suspect that deflationary tendencies were also critical to the financial crisis of 2008.

So I began following daily fluctuations in the principal market estimate of expected inflation: the breakeven TIPS spread. I quickly became persuaded that the correlation was powerful and meaningful, and I then collected data on TIPS spreads going back to 2003, when the Treasury began offering TIPS securities, to see whether the correlation between expected inflation and asset prices had been present since 2003 or was a more recent phenomenon.
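For readers unfamiliar with the series, the breakeven spread is nothing more than the difference between two published yields. A minimal sketch (my own illustration, not anything from the paper):

```python
def breakeven_spread(nominal_yield, tips_yield):
    """Market-implied expected inflation, in percentage points:
    the nominal Treasury yield minus the TIPS yield of the same maturity."""
    return nominal_yield - tips_yield

# Hypothetical numbers: a 10-year Treasury yielding 4.0% and a 10-year
# TIPS yielding 1.5% imply expected inflation of about 2.5% per year.
print(breakeven_spread(4.0, 1.5))  # 2.5
```

In practice the two inputs would be, for example, the 10-year constant-maturity Treasury yield and the 10-year TIPS yield observed on the same date.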

My hunch was that the correlation would not be observed under normal macroeconomic conditions, because it is only when the expected yield from holding money approaches or exceeds the yield from holding real assets that an increase in expected inflation, by reducing the expected yield from holding money, would induce people to switch from holding money to holding assets, thereby driving up the value of assets.

And that’s what the data showed; the correlation between expected inflation and asset prices emerged only in 2008, after the recession that started at the end of 2007 was already under way, and even before the start of the financial crisis, exactly ten years ago, in September 2008. When I wrote up the paper and posted it (“The Fisher Effect Under Deflationary Expectations”), Scott Sumner, who had encouraged me to write up the results after I told him about them, wrote a blogpost about the paper. Paul Krugman picked up on Scott’s post and wrote about it on his blog, generating a lot of interest in the paper.

Although I was confident that the data showed a strong correlation between expected inflation and stock prices after 2008, I was less confident that I had done the econometrics right, so I didn’t try to publish the original 2011 version of the paper. With Scott’s encouragement, I continued to collect more data as time passed, confirming that the correlation remained even after the start of a recovery, while short-term interest rates remained at or near the zero lower bound. The Mercatus Center, whose Program on Monetary Policy is directed by Scott, has just released the new version of the paper as a Working Paper. The paper can also be downloaded from SSRN.

Aside from the longer time span covered, the new version of the paper refines and extends the theoretical account of when and why a correlation between expected inflation and asset prices is likely to be observed and when and why it is unlikely to be observed. I have also done some additional econometric testing beyond the basic ordinary least squares (OLS) regression estimates originally presented, and explained why I think it is unlikely that more sophisticated econometric techniques, such as an error-correction model, would generate more reliable results than those generated by simple OLS regressions. Perhaps in further work I will attempt to construct an explicit error-correction model and compare the results using OLS and an error-correction model.
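To fix ideas, here is a minimal sketch of the kind of simple OLS regression referred to above: daily stock returns regressed on daily changes in the breakeven spread. The data here are synthetic, with a positive relation deliberately built in; none of the numbers or variable names come from the paper.

```python
import random
import statistics

# Synthetic data: 500 "days" of changes in expected inflation (in
# percentage points) and stock returns with a built-in positive slope.
random.seed(0)
d_breakeven = [random.gauss(0.0, 0.02) for _ in range(500)]
stock_ret = [0.05 + 3.0 * x + random.gauss(0.0, 0.1) for x in d_breakeven]

def ols(x, y):
    """Least-squares intercept and slope of y regressed on x."""
    xbar, ybar = statistics.fmean(x), statistics.fmean(y)
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return ybar - slope * xbar, slope

intercept, slope = ols(d_breakeven, stock_ret)
print(f"estimated slope: {slope:.2f}")  # should land near the true value of 3.0
```

A positive, statistically significant slope on the breakeven variable is the kind of result the post-2008 sample delivers; the 2003–2007 sample delivers a slope indistinguishable from zero.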

Here is the abstract of the new version of the paper.

This paper uses the Fisher equation relating the nominal interest rate to the real interest rate and expected inflation to provide a deeper explanation of the financial crisis of 2008 and the subsequent recovery than attributing it to the bursting of the housing-price bubble. The paper interprets the Fisher equation as an equilibrium condition in which expected returns from holding real assets and cash are equalized. When inflation expectations decline, the return to holding cash rises relative to holding real assets. If nominal interest rates are above the zero lower bound, equilibrium is easily restored by adjustments in nominal interest rates and asset prices. But at the zero lower bound, nominal interest rates cannot fall, forcing the entire adjustment onto falling asset prices, thereby raising the expected real return from holding assets. Such an adjustment seems to have triggered the financial crisis of 2008, when the Federal Reserve delayed reducing nominal interest rates out of a misplaced fear of inflation in the summer of 2008 when the economy was already contracting rapidly. Using stock market price data and inflation-adjusted US Treasury securities data, the paper finds that, unlike the 2003–2007 period, when stock prices were uncorrelated with expected inflation, from 2008 through at least 2016, stock prices have been consistently and positively correlated with expected inflation.

The Well-Defined, but Nearly Useless, Natural Rate of Interest

Tyler Cowen recently posted a diatribe against the idea that monetary policy should be conducted by setting the central bank’s interest-rate target at or near the natural rate of interest. Tyler’s post elicited critical responses from Brad DeLong and Paul Krugman, among others. I sympathize with Tyler’s impatience with the natural rate of interest as a guide to policy, but I think the scattershot approach he took in listing, seemingly at random, seven complaints against the natural rate of interest was not the best way to register dissatisfaction with the natural rate. Here’s Tyler’s list of seven complaints.

1 Knut Wicksell, inventor of the term “natural rate of interest,” argued that if the central bank set its target rate equal to the natural rate, it would avoid inflation and deflation and tame the business cycle. Wicksell’s argument was criticized by his friend and countryman David Davidson who pointed out that, with rising productivity, price stability would not result without monetary expansion, which would require the monetary authority to reduce its target rate of interest below the natural rate to induce enough investment to be financed by monetary expansion. Thus, when productivity is rising, setting the target rate of interest equal to the natural rate leads not to price stability, but to deflation.

2 Keynes rejected the natural rate as a criterion for monetary policy, because the natural rate is not unique. The natural rate varies with the level of income and employment.

3 Early Keynesians like Hicks, Hansen, and Modigliani rejected the natural rate as well.

4 The meaning of the natural rate has changed; it was once the rate that would result in a stable price level; now it’s the rate that results in a stable rate of inflation.

5 Friedman also rejected the natural rate because there is no guarantee that setting the target rate equal to the natural rate will result in the rate of money growth that Friedman believed was desirable.

6 Sraffa debunked the natural rate in his 1932 review of Hayek’s Prices and Production.

7 It seems implausible that the natural rate is now negative, as many exponents of the natural rate concept now claim, even though the economy is growing and the marginal productivity of capital is positive.

Let me try to tidy all this up a bit.

The first thing you need to know when thinking about the natural rate is that, like so much else in economics, you will become hopelessly confused if you don’t keep the Fisher equation, which decomposes the nominal rate of interest into the real rate of interest and the expected rate of inflation, in clear sight. Once you begin thinking about the natural rate in the context of the Fisher equation, it becomes obvious that the natural rate can be thought of coherently as either a real rate or a nominal rate, but the moment you are unclear about whether you are talking about a real natural rate or a nominal natural rate, you’re finished.
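To keep the decomposition in front of us, the Fisher equation can be written in standard notation (my gloss, not a quotation from any of the authors discussed) as

```latex
i = r + \pi^{e}
```

where \(i\) is the nominal interest rate, \(r\) the real interest rate, and \(\pi^{e}\) the expected rate of inflation; this is the usual approximation to the exact relation \(1 + i = (1 + r)(1 + \pi^{e})\).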

Thus, Wicksell was implicitly thinking about a situation in which expected inflation is zero so that the real and nominal natural rates coincide. If the rate of inflation is correctly expected to be zero, and the increase in productivity is also correctly expected, the increase in the quantity of money required to sustain a constant price level can be induced by the payment of interest on cash balances. Alternatively, if the payment of interest on cash balances is ruled out, the rate of capital accumulation (forced savings) could be increased sufficiently to cause the real natural interest rate under a constant price level to fall below the real natural interest rate under deflation.

In the Sraffa-Hayek episode, as Paul Zimmerman and I have shown in our paper on that topic, Sraffa failed to understand that the multiplicity of own rates of interest in a pure barter economy did not mean that there was not a unique real natural rate toward which arbitrage would force all the individual own rates to converge. At any moment, therefore, there is a unique real natural rate in a barter economy if arbitrage is operating to equalize the cost of borrowing in terms of every commodity. Moreover, even Sraffa did not dispute that, under Wicksell’s definition of the natural rate as the rate consistent with a stable price level, there is a unique natural rate. Sraffa’s quarrel was only with Hayek’s use of the natural rate, inasmuch as Hayek maintained that the natural rate did not imply a stable price level. Of course, Hayek was caught in a contradiction that Sraffa overlooked, because he identified the real natural rate with an equal nominal rate, so that he was implicitly assuming a constant expected price level even as he was arguing that the neutral monetary policy corresponding to setting the market interest rate equal to the natural rate would imply deflation when productivity was increasing.

I am inclined to be critical of Milton Friedman about many aspects of his monetary thought, but one of his virtues as a monetary economist was that he consistently emphasized Fisher’s distinction between real and nominal interest rates. The point that Friedman was making in the passage quoted by Tyler was that the monetary authority is able to peg nominal variables, such as prices, inflation, and exchange rates, but not real variables, like employment, output, or interest rates. Even pegging the nominal natural rate is impossible, because, inasmuch as the goal of targeting a nominal natural rate is to stabilize the rate of inflation, targeting the nominal natural rate also means targeting the real natural rate. But targeting the real natural rate is not possible, and trying to do so will just get you into trouble.

So Tyler should not be complaining about the change in the meaning of the natural rate; that change simply reflects the gradual penetration of the Fisher equation into the consciousness of the economics profession. We now realize that, given the real natural rate, there is, for every expected rate of inflation, a corresponding nominal natural rate.

Keynes made a very different contribution to our understanding of the natural rate. His point was that there is no reason to assume that the real natural rate of interest is unique. True, at any moment there is some real natural rate toward which arbitrage is forcing all nominal rates to converge. But that real natural rate is a function of the prevailing economic conditions. Keynes believed that there are multiple equilibria, each corresponding to a different level of employment, and that associated with each of those equilibria there could be a different real natural rate. Nowadays, we are less inclined than was Keynes to call an underemployment situation an equilibrium, but there is still no reason to assume that the real natural rate that serves as an attractor for all nominal rates is independent of the state of the economy. If the real natural rate for an underperforming economy is less than the real natural rate that would be associated with the economy if it were in the neighborhood of an optimal equilibrium, there is no reason why either the real natural rate corresponding to an optimal equilibrium or the real natural rate corresponding to the current sub-optimal state of the economy should be the policy rate that the monetary authority chooses as its target.

Finally, what can be said about Tyler’s point that it is implausible to suggest that the real natural rate is negative when the economy is growing (even slowly) and the marginal productivity of capital is positive? Two points.

First, the marginal productivity of gold is very close to zero. If gold is held as bullion, it is being held for expected appreciation over and above the cost of storage. So the ratio of the future price of gold to the spot price of gold should equal one plus the real rate of interest. If you look at futures prices for gold, you will see that they are virtually the same as the spot price. However, storing gold is not costless. According to this article on Bloomberg.com, storage costs for gold range between 0.5 and 1% of the value of gold, implying that the expected rate of return to holding gold is now less than -0.5% a year, which means that the marginal productivity of real capital is negative. Sure, there are plenty of investments out there that are generating positive returns, but those are inframarginal investments. Those inframarginal investments are generating some net gain in productivity, and overall economic growth is positive, but that doesn’t mean that the return on investment at the margin is positive. At the margin, the yield on real capital seems to be negative.
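The gold argument above is just back-of-the-envelope arithmetic, which can be spelled out as follows (the prices are hypothetical; the storage-cost figure is the midpoint of the 0.5–1% range cited):

```python
spot = 1200.0            # hypothetical spot price of gold, $/oz
futures_1y = 1200.0      # one-year futures price, roughly equal to spot
storage_cost = 0.0075    # 0.75%/yr, midpoint of the 0.5-1% range cited

# With futures at spot, the gross return to holding gold is ~0%;
# subtracting storage costs gives the implied net return on the
# marginal real asset.
gross_return = futures_1y / spot - 1.0
net_return = gross_return - storage_cost
print(round(net_return, 4))  # -0.0075, i.e., about -0.75% a year
```

On these numbers, the marginal real return implied by the gold market is negative, which is the point of the paragraph above.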

If, as appears likely, our economy is underperforming, estimates of the real natural rate of interest are not necessarily an appropriate guide for the monetary authority in choosing its target rate of interest. If the aim of monetary policy is to nudge the economy onto a feasible growth path that is above the sub-optimal path along which it is currently moving, it might well be that the appropriate interest-rate target, as long as the economy remains below its optimal growth path, would be less than the natural rate corresponding to the current sub-optimal growth path.

Bernanke’s Continuing Confusion about How Monetary Policy Works

TravisV recently posted a comment on this blog with a link to his comment on Scott Sumner’s blog flagging two apparently contradictory rationales for the Fed’s quantitative easing policy in chapter 19 of Ben Bernanke’s new book in which he demurely takes credit for saving Western Civilization. Here are the two quotes from Bernanke:

1 Our goal was to bring down longer-term interest rates, such as the rates on thirty-year mortgages and corporate bonds. If we could do that, we might stimulate spending—on housing and business capital investment, for example…. Similarly, when we bought longer-term Treasury securities, such as a note maturing in ten years, the yields on those securities tended to decline.

2 A new era of monetary policy activism had arrived, and our announcement had powerful effects. Between the day before the meeting and the end of the year, the Dow would rise more than 3,000 points—more than 40 percent—to 10,428. Longer-term interest rates fell on our announcement, with the yield on ten-year Treasury securities dropping from about 3 percent to about 2.5 percent in one day, a very large move. Over the summer, longer-term yields would reverse and rise to above 4 percent. We would see that increase as a sign of success. Higher yields suggested that investors were expecting both more growth and higher inflation, consistent with our goal of economic revival. Indeed, after four quarters of contraction, revised data would show that the economy would grow at a 1.3 percent rate in the third quarter and a 3.9 percent rate in the fourth.

Over my four years of blogging — especially the first two – I have written a number of posts pointing out that the Fed’s articulated rationale for its quantitative easing – the one expressed in quote number 1 above: that quantitative easing would reduce long-term interest rates and stimulate the economy by promoting investment – was largely irrelevant, because the magnitude of the effect would be far too small to have any noticeable macroeconomic effect.

In making this argument, Bernanke bought into one of the few propositions shared by both Keynes and the Austrians: that monetary policy is effective by operating on long-term interest rates, and that significant investments by business in plant and equipment are responsive to relatively small changes in long-term rates. Keynes, at any rate, had the good sense to realize that long-term investment in plant and equipment is not very responsive to changes in long-term interest rates – a view he had espoused in his Treatise on Money before emphasizing, in the General Theory, expectations about future prices and profitability as the key factor governing investment. Austrians, however, never gave up their theoretical preoccupation with the idea that the entire structural profile of a modern economy is dominated by small changes in the long-term rate of interest.

So for Bernanke’s theory of how QE would be effective to be internally consistent, he would have had to buy into a hyper-Austrian view of how the economy works, which he obviously doesn’t and never did. Sometimes internal inconsistency can be a sign that being misled by bad theory hasn’t overwhelmed a person’s good judgment. So I say even though he botched the theory, give Bernanke credit for his good judgment. Unfortunately, Bernanke’s confusion made it impossible for him to communicate a coherent story about how monetary policy works, undermining, or at least compromising, his ability to build popular support for the policy.

Of course the problem was even deeper than expecting a marginal reduction in long-term interest rates to have any effect on the economy. The Fed’s refusal to budge from its two-percent inflation target drastically limited the potential stimulus that monetary policy could provide.

I might add that I just noticed that I had already drawn attention to Bernanke’s inconsistent rationale for adopting QE in my paper “The Fisher Effect Under Deflationary Expectations” written before I started this blog, which both Scott Sumner and Paul Krugman plugged after I posted it on SSRN.

Here’s what I said in my paper (p. 18):

If so, the expressed rationale for the Fed’s quantitative easing policy (Bernanke 2010), namely to reduce long term interest rates, thereby stimulating spending on investment and consumption, reflects a misapprehension of the mechanism by which the policy would be most likely to operate, increasing expectations of both inflation and future profitability and, hence, of the cash flows derived from real assets, causing asset values to rise in step with both inflation expectations and real interest rates. Rather than a policy to reduce interest rates, quantitative easing appears to be a policy for increasing interest rates, though only as a consequence of increasing expected future prices and cash flows.

I wrote that almost five years ago, and it still seems pretty much on the mark.

What Is the Historically Challenged, Rule-Worshipping John Taylor Talking About?

A couple of weeks ago, I wrote a post chiding John Taylor for his habitual verbal carelessness. As if that were not enough, Taylor, in a recent talk at the IMF, appearing on a panel on monetary policy with former Fed Chairman Ben Bernanke and the former head of the South African central bank, Gill Marcus,  extends his trail of errors into new terrain: historical misstatement. Tony Yates and Paul Krugman have already subjected Taylor’s talk to well-deserved criticism for its conceptual confusion, but I want to focus on the outright historical errors Taylor blithely makes in his talk, a talk noteworthy, apart from its conceptual confusion and historical misstatements, for the incessant repetition of the meaningless epithet “rules-based,” as if he were a latter-day Homeric rhapsodist incanting a sacred text.

Taylor starts by offering his own “mini history of monetary policy in the United States” since the late 1960s.

When I first started doing monetary economics . . ., monetary policy was highly discretionary and interventionist. It went from boom to bust and back again, repeatedly falling behind the curve, and then over-reacting. The Fed had lofty goals but no consistent strategy. If you measure macroeconomic performance as I do by both price stability and output stability, the results were terrible. Unemployment and inflation both rose.

What Taylor means by “interventionist,” other than establishing that he is against it, is not clear. Nor is the meaning of “bust” in this context. The recession of 1970 was perhaps the mildest of the entire post-World War II era, and the 1974-75 recession was certainly severe, but it was largely the result of a supply shock and politically imposed wage and price controls exacerbated by monetary tightening. (See my post about 1970s stagflation.) Taylor talks about the Fed’s lofty goals, but doesn’t say what they were. In fact in the 1970s, the Fed was disclaiming responsibility for inflation, and Arthur Burns, a supposedly conservative Republican economist, appointed by Nixon to be Fed Chairman, actually promoted what was then called an “incomes policy,” thereby enabling and facilitating Nixon’s infamous wage-and-price controls. The Fed’s job was to keep aggregate demand high, and, in the widely held view at the time, it was up to the politicians to keep business and labor from getting too greedy and causing inflation.

Then in the early 1980s policy changed. It became more focused, more systematic, more rules-based, and it stayed that way through the 1990s and into the start of this century.

Yes, in the early 1980s, policy did change, and it did become more focused, and for a short time – about a year and a half – it did become more rules-based. (I have no idea what “systematic” means in this context.) And the result was the sharpest and longest post-World War II downturn until the Little Depression. Policy changed, because, under Volcker, the Fed took ownership of inflation. It became more rules-based, because, under Volcker, the Fed attempted to follow a modified sort of Monetarist rule, seeking to keep the growth of the monetary aggregates within a pre-determined target range. I have explained in my book and in previous posts (e.g., here and here) why the attempt to follow a Monetarist rule was bound to fail and why the attempt would have perverse feedback effects, but others, notably Charles Goodhart (discoverer of Goodhart’s Law), had identified the problem even before the Fed adopted its misguided policy. The recovery did not begin until the summer of 1982 after the Fed announced that it would allow the monetary aggregates to grow faster than the Fed’s targets.

So the success of Fed monetary policy under Volcker can properly be attributed a) to the Fed’s taking ownership of inflation and b) to its decision to abandon the rules-based policy urged on it by Milton Friedman and his Monetarist acolytes like Allan Meltzer, whom Taylor now cites approvingly for supporting rules-based policies. The only monetary policy rule that the Fed ever adopted under Volcker having been scrapped prior to the beginning of the recovery from the 1981-82 recession, the notion that the Great Moderation was ushered in by the Fed’s adoption of a “rules-based” policy is a total misrepresentation.

But Taylor is not done.

Few complained about spillovers or beggar-thy-neighbor policies during the Great Moderation.  The developed economies were effectively operating in what I call a nearly international cooperative equilibrium.

Really! Has Professor Taylor, who served as Under Secretary of the Treasury for International Affairs, ever heard of the Plaza and the Louvre Accords?

The Plaza Accord or Plaza Agreement was an agreement between the governments of France, West Germany, Japan, the United States, and the United Kingdom, to depreciate the U.S. dollar in relation to the Japanese yen and German Deutsche Mark by intervening in currency markets. The five governments signed the accord on September 22, 1985 at the Plaza Hotel in New York City. (“Plaza Accord” Wikipedia)

The Louvre Accord was an agreement, signed on February 22, 1987 in Paris, that aimed to stabilize the international currency markets and halt the continued decline of the US Dollar caused by the Plaza Accord. The agreement was signed by France, West Germany, Japan, Canada, the United States and the United Kingdom. (“Louvre Accord” Wikipedia)

The chart below shows the fluctuation in the trade weighted value of the US dollar against the other major trading currencies since 1980. Does it look like there was a nearly international cooperative equilibrium in the 1980s?

[Chart: trade-weighted value of the US dollar against the other major trading currencies since 1980]

But then there was a setback. The Fed decided to hold the interest rate very low during 2003-2005, thereby deviating from the rules-based policy that worked well during the Great Moderation.  You do not need policy rules to see the change: With the inflation rate around 2%, the federal funds rate was only 1% in 2003, compared with 5.5% in 1997 when the inflation rate was also about 2%.

Well, in 1997 the expansion was six years old and the unemployment rate was under 5% and falling. In 2003, the expansion was barely under way and unemployment was rising above 6%.

I could provide other dubious historical characterizations that Taylor makes in his talk, but I will just mention a few others relating to the Volcker episode.

Some argue that the historical evidence in favor of rules is simply correlation not causation.  But this ignores the crucial timing of events:  in each case, the changes in policy occurred before the changes in performance, clear evidence for causality.  The decisions taken by Paul Volcker came before the Great Moderation.

Yes, and as I pointed out above, inflation came down when Volcker and the Fed took ownership of the inflation, and were willing to tolerate or inflict sufficient pain on the real economy to convince the public that the Fed was serious about bringing the rate of inflation down to a rate of roughly 4%. But the recovery and the Great Moderation did not begin until the Fed renounced the only rule that it had ever adopted, namely targeting the rate of growth of the monetary aggregates. The Fed, under Volcker, never even adopted an explicit inflation target, much less a specific rule for setting the Federal Funds rate. The Taylor rule was just an ex post rationalization of what the Fed had done by instinct.

Another point relates to the zero bound. Wasn’t that the reason that the central banks had to deviate from rules in recent years? Well it was certainly not a reason in 2003-2005 and it is not a reason now, because the zero bound is not binding. It appears that there was a short period in 2009 when zero was clearly binding. But the zero bound is not a new thing in economics research. Policy rule design research took that into account long ago. The default was to move to a stable money growth regime not to massive asset purchases.

OMG! Is Taylor’s preferred rule at the zero lower bound the stable money growth rule that Volcker tried, but failed, to implement in 1981-82? Is that the lesson that Taylor wants us to learn from the Volcker era?

Some argue that rules based policy for the instruments is not needed if you have goals for the inflation rate or other variables. They say that all you really need for effective policy making is a goal, such as an inflation target and an employment target. The rest of policymaking is doing whatever the policymakers think needs to be done with the policy instruments. You do not need to articulate or describe a strategy, a decision rule, or a contingency plan for the instruments. If you want to hold the interest rate well below the rule-based strategy that worked well during the Great Moderation, as the Fed did in 2003-2005, then it’s ok as long as you can justify it at the moment in terms of the goal.

This approach has been called “constrained discretion” by Ben Bernanke, and it may be constraining discretion in some sense, but it is not inducing or encouraging a rule as a “rules versus discretion” dichotomy might suggest.  Simply having a specific numerical goal or objective is not a rule for the instruments of policy; it is not a strategy; it ends up being all tactics.  I think the evidence shows that relying solely on constrained discretion has not worked for monetary policy.

Taylor wants a rule for the instruments of policy. Well, although Taylor will not admit it, a rule for the instruments of policy is precisely what Volcker tried to implement in 1981-82 when he was trying — and failing — to target the monetary aggregates, thereby driving the economy into a rapidly deepening recession, before escaping from the positive-feedback loop in which he and the economy were trapped by scrapping his monetary growth targets. Since 2009, Taylor has been calling for the Fed to raise the currently targeted instrument, the Fed Funds rate, even though inflation has been below the Fed’s 2% target almost continuously for the past three years. Not only does Taylor want to target the instrument of policy, he wants the instrument target to preempt the policy target. If that is not all tactics and no strategy, I don’t know what is.

A Keynesian Postscript on the Bright and Shining, Dearly Beloved, Depression of 1920-21

In his latest blog post Paul Krugman drew my attention to Keynes’s essay The Great Slump of 1930. In describing the enormity of the 1930 slump, Keynes properly compared the severity of the 1930 slump with the 1920-21 episode, noting that the price decline in 1920-21 was of a similar magnitude to that of 1930. James Grant, in his book on the Greatest Depression, argues that the Greatest Depression was so outstanding, because, in contrast to the Great Depression, there was no attempt by the government in 1920-21 to cushion the blow. Instead, the powers that be just stood back and let the devil take the hindmost.

Keynes had a different take on the difference between the Greatest Depression and the Great Depression:

First of all, the extreme violence of the slump is to be noticed. In the three leading industrial countries of the world—the United States, Great Britain, and Germany—10,000,000 workers stand idle. There is scarcely an important industry anywhere earning enough profit to make it expand—which is the test of progress. At the same time, in the countries of primary production the output of mining and of agriculture is selling, in the case of almost every important commodity, at a price which, for many or for the majority of producers, does not cover its cost. In 1921, when prices fell as heavily, the fall was from a boom level at which producers were making abnormal profits; and there is no example in modern history of so great and rapid a fall of prices from a normal figure as has occurred in the past year. Hence the magnitude of the catastrophe.

In diagnosing what went wrong in the Great Depression, Keynes largely, though not entirely, missed the most important cause of the catastrophe, the appreciation of gold caused by the attempt to restore an international gold standard without a means by which to control the monetary demand for gold of the world’s central banks — most notoriously, the insane Bank of France. Keynes should have paid more attention to Hawtrey and Cassel than he did. But Keynes was absolutely on target in explaining why the world more easily absorbed and recovered from a 40% deflation in 1920-21 than it was able to do in 1929-33.

John Cochrane, Meet Richard Lipsey and Kenneth Carlaw

Paul Krugman wrote an uncharacteristically positive post today about John Cochrane’s latest post in which Cochrane dialed it down a bit after writing two rather heated posts (here and here) attacking Alan Blinder for a recent piece he wrote in the New York Review of Books in which Blinder dismissively quoted Cochrane’s remark about Keynesian economics being fairy tales that haven’t been taught to graduate students since the 1960s. I don’t want to get into that fracas, but I was amused to read the following paragraphs at the end of Cochrane’s second post in the current series.

Thus, if you read Krugman’s columns, you will see him occasionally crowing about how Keynesian economics won, and how the disciples of Stan Fisher at MIT have spread out to run the world. He’s right. Then you see him complaining about how nobody in academia understands Keynesian economics. He’s right again.

Perhaps academic research ran off the rails for 40 years producing nothing of value. Social sciences can do that. Perhaps our policy makers are stuck with simple stories they learned as undergraduates; and, as has happened countless times before, new ideas will percolate up when the generation trained in the 1980s makes their way to the top of policy circles.

I think we can agree on something. If one wants to write about “what’s wrong with economics,” such a huge divide between academic research ideas and the ideas running our policy establishment is not a good situation.

The right way to address this is with models — written down, objective models, not pundit prognostications — and data. What accounts, quantitatively, for our experience?  I see old-fashioned Keynesianism losing because, having dramatically failed that test once, its advocates are unwilling to do so again, preferring a campaign of personal attack in the popular press. Models confront data in the pages of the AER, the JPE, the QJE, and Econometrica. If old-time Keynesianism really does account for the data, write it down and let’s see.

So Cochrane wants to take this bickering out of the realm of punditry and put the conflicting models to an objective test of how well they perform against the data. Sounds good to me, but I can’t help but wonder if Cochrane means to attribute the academic ascendancy of RBC/New Classical models to their having empirically outperformed competing models? If so, I am not aware that anyone else has made that claim, including Kartik Athreya who wrote the book on the subject. (Here’s my take on the book.) Again just wondering – I am not a macroeconometrician – but is there any study showing that RBC or DSGE models outperform old-fashioned Keynesian models in explaining macro-time-series data?

But I am aware of, and have previously written about, a paper by Kenneth Carlaw and Richard Lipsey (“Does History Matter?: Empirical Analysis of Evolutionary versus Stationary Equilibrium Views of the Economy”) in which they show that time-series data for six OECD countries provide no evidence of the stylized facts about inflation and unemployment implied by RBC and New Keynesian theory. Here is the abstract from the Carlaw-Lipsey paper.

The evolutionary vision in which history matters is of an evolving economy driven by bursts of technological change initiated by agents facing uncertainty and producing long term, path-dependent growth and shorter-term, non-random investment cycles. The alternative vision in which history does not matter is of a stationary, ergodic process driven by rational agents facing risk and producing stable trend growth and shorter term cycles caused by random disturbances. We use Carlaw and Lipsey’s simulation model of non-stationary, sustained growth driven by endogenous, path-dependent technological change under uncertainty to generate artificial macro data. We match these data to the New Classical stylized growth facts. The raw simulation data pass standard tests for trend and difference stationarity, exhibiting unit roots and cointegrating processes of order one. Thus, contrary to current belief, these tests do not establish that the real data are generated by a stationary process. Real data are then used to estimate time-varying NAIRU’s for six OECD countries. The estimates are shown to be highly sensitive to the time period over which they are made. They also fail to show any relation between the unemployment gap, actual unemployment minus estimated NAIRU and the acceleration of inflation. Thus there is no tendency for inflation to behave as required by the New Keynesian and earlier New Classical theory. We conclude by rejecting the existence of a well-defined short-run, negatively sloped Phillips curve, a NAIRU, a unique general equilibrium, short and long-run, a vertical long-run Phillips curve, and the long-run neutrality of money.

Cochrane, like other academic macroeconomists with an RBC/New Classical orientation, seems inordinately self-satisfied with the current state of modern macroeconomics, but curiously sensitive to, and defensive about, criticism from the unwashed masses. Rather than weigh in again with my own criticisms, let me close by quoting another abstract – this one from a paper (“Complexity Economics: A Different Framework for Economic Thought”) by Brian Arthur, certainly one of the smartest, and most technically capable, economists around.

This paper provides a logical framework for complexity economics. Complexity economics builds from the proposition that the economy is not necessarily in equilibrium: economic agents (firms, consumers, investors) constantly change their actions and strategies in response to the outcome they mutually create. This further changes the outcome, which requires them to adjust afresh. Agents thus live in a world where their beliefs and strategies are constantly being “tested” for survival within an outcome or “ecology” these beliefs and strategies together create. Economics has largely avoided this nonequilibrium view in the past, but if we allow it, we see patterns or phenomena not visible to equilibrium analysis. These emerge probabilistically, last for some time and dissipate, and they correspond to complex structures in other fields. We also see the economy not as something given and existing but forming from a constantly developing set of technological innovations, institutions, and arrangements that draw forth further innovations, institutions and arrangements.

Complexity economics sees the economy as in motion, perpetually “computing” itself — perpetually constructing itself anew. Where equilibrium economics emphasizes order, determinacy, deduction, and stasis, complexity economics emphasizes contingency, indeterminacy, sense-making, and openness to change. In this framework time, in the sense of real historical time, becomes important, and a solution is no longer necessarily a set of mathematical conditions but a pattern, a set of emergent phenomena, a set of changes that may induce further changes, a set of existing entities creating novel entities. Equilibrium economics is a special case of nonequilibrium and hence complexity economics, therefore complexity economics is economics done in a more general way. It shows us an economy perpetually inventing itself, creating novel structures and possibilities for exploitation, and perpetually open to response.

HT: Mike Norman

The Nearly Forgotten Dearly Beloved 1920-21 Depression Yet Again; Or, Never Reason from a Quantity Change

The industrious James Grant recently published a book about the 1920-21 Depression. It has received enthusiastic reviews in the Wall Street Journal and Barron’s, was the subject of an admiring column by Washington Post columnist Robert J. Samuelson, and was celebrated at a Cato Institute panel discussion, luncheon, and book-signing event. The Cato extravaganza elicited a dismissive blog post by Barkley Rosser which was linked to by Paul Krugman on his blog. The Rosser/Krugman tandem provoked an unhappy reply on the Free Banking blog from George Selgin who chaired the Cato panel discussion. And the 1920-21 Depression is now the latest hot topic in the econblogosphere.

I am afraid that there are multiple layers of errors and confusion that are being mixed up and compounded in this discussion, errors and confusion derived from basic misunderstandings about how the gold standard operated that have been plaguing the economics profession and the financial world for about two and a half centuries. If you want to understand how the gold standard worked, what you have to read is the book by Ralph Hawtrey entitled – drum roll, please – The Gold Standard.

Here are the basic things you need to know about the gold standard.

1. The gold standard operates by creating an equivalence between a currency unit and a fixed amount of gold.

2. The gold standard does not require gold to circulate as money in the form of coins. That was historically the case, but a gold standard can function with no gold coins or even gold certificates.

3. The value of a currency unit and the value of a corresponding weight of gold are necessarily equalized by arbitrage.

4. Equality between a currency unit and a corresponding weight of gold does not necessarily show the direction of causality; the currency unit may determine the value of gold, not the other way around. In other words, making gold the standard of value for currency affects the demand for gold which affects the value of gold. Decisions made by monetary authorities under the gold standard necessarily affect the value of gold, so a gold standard does not somehow make the value of money independent of monetary policy.

5. When more than one country is on a gold standard, the countries share a common price level, because the value of gold is determined in an international market.
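Point 3 does a lot of work in what follows, so here is a minimal numeric sketch of the arbitrage. The $20.67-per-ounce parity is the actual pre-1933 US legal figure; the market prices and the `arbitrage_profit` helper are hypothetical illustrations, not anything from the post.

```python
# Illustrative arithmetic for point 3: arbitrage ties the market price
# of gold to the legal parity. $20.67/oz is the historical pre-1933 US
# parity; the market prices used below are hypothetical.
PARITY = 20.67  # dollars per troy ounce of gold, US legal parity


def arbitrage_profit(market_price: float) -> float:
    """Per-ounce profit from the profitable direction of gold arbitrage.

    Below parity: buy bullion and present it to the mint for coin.
    Above parity: melt coin and sell the bullion. Either trade pushes
    the market price back toward parity.
    """
    return abs(PARITY - market_price)


# A hypothetical bullion price of $20.00 leaves $0.67/oz of profit,
# which arbitrage competes away until the market price equals parity.
print(round(arbitrage_profit(20.00), 2))  # 0.67
```

Either direction of the trade is self-extinguishing, which is why the equality in point 3 holds whether or not coins actually circulate (point 2).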

Keeping those basics in mind, let’s quickly try to understand what was going on in 1920 when the Fed decided to raise its discount rate to the then unprecedented level of 7 percent. The situation in 1920 was the outcome of the previous six years, during which World War I effectively destroyed the gold standard as a functioning institution, even though its existence was in some sense still legally recognized.

Under the gold standard, gold was the ultimate way of discharging international debts. In World War I, belligerents had to pay for imports with gold, thus governments amassed all available gold with which to pay for the imports required to support the war effort. Gold coins were melted down and converted to bullion so the gold could be exported. For a private citizen in a belligerent country to demand that the national currency unit be converted to gold would be considered an unpatriotic if not a treasonous act. So the gold standard ceased to function in belligerent countries. In non-belligerent countries, which were busy exporting to the belligerents, the result was a massive inflow of gold, causing a spectacular increase in the amount of gold held by the US Treasury between 1914 and 1917. Other non-belligerents like Sweden and Switzerland experienced similar inflows.

Quantity theorists and Monetarists like Milton Friedman habitually misinterpret the wartime inflation, attributing it to an inflow of gold that increased the money supply, thereby perpetrating the price-specie-flow-mechanism fallacy. What actually happened was that the huge demonetization of gold coins by the belligerents, and their export of large quantities of gold to non-belligerent countries in which a free market in gold continued to operate, drove down the value of gold. A falling value of gold under a gold standard logically implies rising prices for all other goods and services. Rising prices increased the nominal demand for money, which more or less automatically caused a corresponding adjustment in the quantity of money. A rising price level caused the quantity of money to increase, not the other way around.

In 1917, just before the US entered the war, the US, still effectively on a gold standard as gold flowed into the Treasury, had experienced a drastic inflation, like all other gold standard countries, because gold was rapidly losing value, as it was being demonetized and exported by the belligerent countries. But when the US entered the war in 1917, the US, like other belligerents, suspended operation of the gold standard, thereby accelerating the depreciation of gold, forcing the few remaining countries on the gold standard to suspend the gold standard to avoid runaway inflation. Inflationary pressure in the US did increase after entry into the war, but the war-induced fiat inflation, to some extent suppressed or disguised by price controls, was actually slower than inflation in terms of gold.

When the war ended, the US went back on the gold standard by again making the dollar convertible into gold at the legal parity. Doing so meant that the US price level in terms of dollars was below the notional (no currency any longer being convertible into gold) world price level in terms of gold. In other belligerent countries, notably Britain, France and Germany, inflation in terms of their national currencies exceeded gold inflation, requiring them to deflate in order to restore the legal parity in terms of gold. Thus, the US was the only country in the world that was both willing and able to return to the gold standard at the prewar parity. Sweden and Switzerland could have done so, but preferred to avoid the inflationary consequences of a return to the gold standard.

Once dollar convertibility into gold was restored, arbitrage forced the US price level to rise until it equaled the gold price level. The excess of the gold price level over the US price level explains the anomalous post-war inflation – everyone knows that prices are supposed to fall, not rise, when a war ends — in the US. The rest of the world, then, had to choose between accepting US inflation, by keeping their currencies pegged to the dollar, or allowing their currencies to appreciate against the dollar. The anomalous post-war inflation was caused by the reequilibration of the US price level to the gold price level, not, as commonly supposed, by Fed inexperience or incompetence.

To stop the post-war inflation, the Fed could have simply abandoned the gold standard, or it could have revalued the dollar in terms of gold, by reducing the official dollar price of gold. (I ignore the minor detail that the official dollar price of gold was then determined by statute.) Instead, the Fed — whether knowingly or not I can’t say – chose to increase the value of gold. The method by which it did so was to raise its discount rate, thereby making it cheaper to obtain dollars by selling gold to the Treasury than by borrowing from the Fed. The flood of gold into the Treasury in 1920-21 succeeded in taking a huge amount of gold out of private and public hands, thus driving up the real value of gold, and forcing down the gold price level. That’s when the brutal deflation of 1920-21 started. At some point, the Fed and the Treasury decided that they had had enough, having amassed about 40% of the world’s gold reserves, and began reducing the discount rate, thereby slowing the inflow of gold into the US, and stopping its appreciation. And that’s when and how the dearly beloved, but quite dreadful, depression of 1920-21 came to an end.

Just How Infamous Was that Infamous Open Letter to Bernanke?

There’s been a lot of comment recently about the infamous 2010 open letter to Ben Bernanke penned by an assorted group of economists, journalists, and financiers warning that the Fed’s quantitative easing policy would cause inflation and currency debasement.

Critics of that letter (e.g., Paul Krugman and Brad DeLong) have been having fun with the signatories, ridiculing them for what now seems like a chicken-little forecast of disaster. Those signatories who have responded to inquiries about how they now feel about that letter, notably Cliff Asness and Niall Ferguson, have made two arguments: 1) the letter was just a warning that QE was creating a risk of inflation, and 2) despite the historically low levels of inflation since the letter was written, the risk that inflation could increase as a result of QE still exists.

For the most part, critics of the open letter have focused on the absence of inflation since the Fed adopted QE, characterizing that absence as an easily predictable outcome, a straightforward implication of basic macroeconomics, which it was ignorant or foolish of the signatories to have ignored. In particular, the signatories should have known that, once interest rates fall to the zero lower bound, the demand for money becomes so elastic that the public willingly holds any amount of money that is created, rendering monetary policy ineffective. Just as a semantic point, I would observe that the term “liquidity trap” used to describe such a situation is actually a slight misnomer, inasmuch as the term was coined to describe a situation posited by Keynes in which the demand for money becomes elastic above the zero lower bound. So the assertion that monetary policy is ineffective at the zero lower bound is actually a weaker claim than the one Keynes made about the liquidity trap. As I have suggested previously, the current zero-lower-bound argument is better described as a Hawtreyan credit deadlock than as a Keynesian liquidity trap.

Sorry, but I couldn’t resist the parenthetical history-of-thought digression; let’s get back to that infamous open letter.

Those now heaping scorn on signatories to the open letter are claiming that it was obvious that quantitative easing would not increase inflation. I must confess that I did not think that that was the case; I believed that quantitative easing by the Fed could indeed produce inflation. And that’s why I was in favor of quantitative easing. I was hoping for a repeat of what I have called the short but sweet recovery of 1933, when, in the depths of the Great Depression, almost immediately following the worst financial crisis in American history, capped by a one-week bank holiday announced by FDR upon being inaugurated President in March 1933, the US economy, propelled by a 14% rise in wholesale prices in the aftermath of FDR’s suspension of the gold standard and 40% devaluation of the dollar, began the fastest expansion it ever had, industrial production leaping by 70% from April to July, and the Dow Jones average more than doubling. Unfortunately, FDR spoiled it all by getting Congress to pass the monumentally stupid National Industrial Recovery Act, thereby strangling the recovery with mandatory wage increases, cost increases, and regulatory ceilings on output as a way to raise prices. Talk about snatching defeat from the jaws of victory!

Inflation having worked splendidly as a recovery strategy during the Great Depression, I have believed all along that we could quickly recover from the Little Depression if only we would give inflation a chance. In the Great Depression, too, there were those who argued either that monetary policy is ineffective – “you can’t push on a string” — or that it would be calamitous — causing inflation and currency debasement – or even both. But the undeniable fact is that inflation worked; countries that left the gold standard recovered, because once currencies were detached from gold, prices could rise sufficiently to make production profitable again, thereby stimulating multiplier effects (aka supply-side increases in resource utilization) that fueled further economic expansion. And oh yes, don’t forget the badly needed relief it provided to debtors, relief that actually served the interests of creditors as well.

So my problem with the open letter to Bernanke is not that the letter failed to recognize the existence of a Keynesian liquidity trap or a Hawtreyan credit deadlock, but that the open letter viewed inflation as the problem when, in my estimation at any rate, inflation is the solution.

Now, it is certainly possible that, as critics of the open letter maintain, monetary policy at the zero lower bound is ineffective. However, there is evidence that QE announcements, at least initially, did raise inflation expectations as reflected in TIPS spreads. And we also know (see my paper) that for a considerable period of time (from 2008 through at least 2012) stock prices were positively correlated with inflation expectations, a correlation that one would not expect to observe under normal circumstances.
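The breakeven calculation behind this evidence is simple enough to sketch. The spread is the nominal Treasury yield minus the real yield on a TIPS of the same maturity, and the claim is that this implied-expected-inflation series moved with stock prices. All yields and index levels below are hypothetical, chosen only to illustrate the computation, not taken from the paper.

```python
from statistics import mean

# Hypothetical 10-year yields (percent) and stock-index closes.
nominal_10y = [2.9, 2.6, 2.4, 2.7, 3.0]  # nominal Treasury yields
tips_10y    = [1.1, 1.0, 0.9, 1.0, 1.1]  # real (TIPS) yields
stocks      = [900, 820, 780, 860, 950]  # stock-index closes

# Breakeven TIPS spread = nominal yield - real yield
#                       = market-implied expected inflation.
breakeven = [n - r for n, r in zip(nominal_10y, tips_10y)]


def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


print([round(b, 1) for b in breakeven])  # [1.8, 1.6, 1.5, 1.7, 1.9]
print(pearson(breakeven, stocks) > 0)    # True: the posited positive correlation
```

Under normal circumstances one would expect no systematic correlation here; a persistently positive one is the anomaly the paper documents for 2008-2012.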

So why did the huge increase in the monetary base during the Little Depression not cause significant inflation even though monetary policy during the Great Depression clearly did raise the price level in the US and in the other countries that left the gold standard? Well, perhaps the success of monetary policy in ending the Great Depression could not be repeated under modern conditions when all currencies are already fiat currencies. It may be that, starting from an interwar gold standard inherently biased toward deflation, abandoning the gold standard created, more or less automatically, inflationary expectations that allowed prices to rise rapidly toward levels consistent with a restoration of macroeconomic equilibrium. However, in the current fiat money system in which inflation expectations have become anchored to an inflation target of 2 percent or less, no amount of money creation can budge inflation off its expected path, especially at the zero lower bound, and especially when the Fed is paying higher interest on reserves than yielded by short-term Treasuries.

Under our current inflation-targeting monetary regime, the expectation of low inflation seems to have become self-fulfilling. Without an explicit increase in the inflation target or the price-level target (or the NGDP target), the Fed cannot deliver the inflation that could provide a significant economic stimulus. So the problem, it seems to me, is not that we are stuck in a liquidity trap; the problem is that we are stuck in an inflation-targeting monetary regime.

 

Explaining the Hegemony of New Classical Economics

Simon Wren-Lewis, Robert Waldmann, and Paul Krugman have all recently devoted additional space to explaining – ruefully, for the most part – how it came about that New Classical Economics took over mainstream macroeconomics just about half a century after the Keynesian Revolution. And Mark Thoma got them all started by a complaint about the sorry state of modern macroeconomics and its failure to prevent or to cure the Little Depression.

Wren-Lewis believes that the main problem with modern macro is too much of a good thing, the good thing being microfoundations. Those microfoundations, in Wren-Lewis’s rendering, filled certain gaps in the ad hoc Keynesian expenditure functions. Although the gaps were not as serious as the New Classical School believed, adding an explicit model of intertemporal expenditure plans derived from optimization conditions and rational expectations, was, in Wren-Lewis’s estimation, an improvement on the old Keynesian theory. The improvements could have been easily assimilated into the old Keynesian theory, but weren’t because New Classicals wanted to junk, not improve, the received Keynesian theory.

Wren-Lewis believes that it is actually possible for the progeny of Keynes and the progeny of Fisher to coexist harmoniously, and despite his discomfort with the anti-Keynesian bias of modern macroeconomics, he views the current macroeconomic research program as progressive. By progressive, I interpret him to mean that macroeconomics is still generating new theoretical problems to investigate, and that attempts to solve those problems are producing a stream of interesting and useful publications – interesting and useful, that is, to other economists doing macroeconomic research. Whether the problems and their solutions are useful to anyone else is perhaps not quite so clear. But even if interest in modern macroeconomics is largely confined to practitioners of modern macroeconomics, that fact alone would not conclusively show that the research program in which they are engaged is not progressive, the progressiveness of the research program requiring no more than a sufficient number of self-selecting econ grad students, and a willingness of university departments and sources of research funding to cater to the idiosyncratic tastes of modern macroeconomists.

Robert Waldmann, unsurprisingly, takes a rather less charitable view of modern macroeconomics, focusing on its failure to discover any new, previously unknown, empirical facts about the macroeconomy, or to explain known facts better than alternative models do, e.g., by more accurately predicting observed macro time-series data. By that admittedly demanding criterion, Waldmann finds nothing progressive in the modern macroeconomics research program.

Paul Krugman weighed in by emphasizing not only the ideological agenda behind the New Classical Revolution, but the self-interest of those involved:

Well, while the explicit message of such manifestos is intellectual – this is the only valid way to do macroeconomics – there’s also an implicit message: from now on, only my students and disciples will get jobs at good schools and publish in major journals. And that, to an important extent, is exactly what happened; Ken Rogoff wrote about the “scars of not being able to publish sticky-price papers during the years of new classical repression.” As time went on and members of the clique made up an ever-growing share of senior faculty and journal editors, the clique’s dominance became self-perpetuating – and impervious to intellectual failure.

I don’t disagree that there has been intellectual repression, and that this has made professional advancement difficult for those who don’t subscribe to the reigning macroeconomic orthodoxy, but I think that the story is more complicated than Krugman suggests. The reason I say that is because I cannot believe that the top-ranking economics departments at schools like MIT, Harvard, UC Berkeley, Princeton, and Penn, and other supposed bastions of saltwater thinking have bought into the underlying New Classical ideology. Nevertheless, microfounded DSGE models have become de rigueur for any serious academic macroeconomic theorizing, not only in the Journal of Political Economy (Chicago), but in the Quarterly Journal of Economics (Harvard), the Review of Economics and Statistics (MIT), and the American Economic Review. New Keynesians, like Simon Wren-Lewis, have made their peace with the new order, and old Keynesians have been relegated to the periphery, unable to publish in the journals that matter without observing the generally accepted (even by those who don’t subscribe to New Classical ideology) conventions of proper macroeconomic discourse.

So I don’t think that Krugman’s ideology plus self-interest story fully explains how the New Classical hegemony was achieved. What I think is missing from his story is the spurious methodological requirement of microfoundations foisted on macroeconomists in the course of the 1970s. I have discussed microfoundations in a number of earlier posts (here, here, here, here, and here) so I will try, possibly in vain, not to repeat myself too much.

The importance and desirability of microfoundations were never questioned. What, after all, was the neoclassical synthesis, if not an attempt, partly successful and partly unsuccessful, to integrate monetary theory with value theory, or macroeconomics with microeconomics? But in the early 1970s the focus of attempts to provide microfoundations, notably in the 1970 Phelps volume, changed from embedding the Keynesian system in a general-equilibrium framework, as Patinkin had done, to providing an explicit microeconomic rationale for the Keynesian idea that the labor market could not be cleared via wage adjustments.

In chapter 19 of the General Theory, Keynes struggled to come up with a convincing general explanation for the failure of nominal-wage reductions to clear the labor market. Instead, he offered an assortment of seemingly ad hoc arguments about why nominal-wage adjustments would not succeed in reducing unemployment, enabling all workers willing to work at the prevailing wage to find employment at that wage. This forced Keynesians into the awkward position of relying on an argument — wages tend to be sticky, especially in the downward direction — that was not really different from one used by the “Classical Economists” excoriated by Keynes to explain high unemployment: that rigidities in the price system – often politically imposed rigidities – prevented wage and price adjustments from equilibrating demand with supply in the textbook fashion.

These early attempts at providing microfoundations were largely exercises in applied price theory, explaining why self-interested behavior by rational workers and employers lacking perfect information about all potential jobs and all potential workers would not result in immediate price adjustments that would enable all workers to find employment at a uniform market-clearing wage. Although these largely search-theoretic models led to a more sophisticated and nuanced understanding of labor-market dynamics than economists had previously had, the models ultimately did not provide a fully satisfactory account of cyclical unemployment. But the goal of microfoundations was to explain a certain set of phenomena in the labor market that had not been seriously investigated, in the hope that price and wage stickiness could be analyzed as an economic phenomenon rather than being arbitrarily introduced into models as an ad hoc, albeit seemingly plausible, assumption.

But instead of pursuing microfoundations as an explanatory strategy, the New Classicals chose to impose it as a methodological prerequisite. A macroeconomic model was inadmissible unless it could be explicitly and formally derived from the optimizing choices of fully rational agents. Instead of trying to enrich and potentially transform the Keynesian model with a deeper analysis and understanding of the incentives and constraints under which workers and employers make decisions, the New Classicals used microfoundations as a methodological tool by which to delegitimize Keynesian models, those models being insufficiently or improperly microfounded. Instead of using microfoundations as a method by which to make macroeconomic models conform more closely to the imperfect and limited informational resources available to actual employers deciding to hire or fire employees, and actual workers deciding to accept or reject employment opportunities, the New Classicals chose to use microfoundations as a methodological justification for the extreme unrealism of the rational-expectations assumption, portraying it as nothing more than the consistent application of the rationality postulate underlying standard neoclassical price theory.

For the New Classicals, microfoundations became a reductionist crusade. There is only one kind of economics, and it is not macroeconomics. Even the idea that there could be a conceptual distinction between micro and macroeconomics was unacceptable to Robert Lucas, just as the idea that there is, or could be, a mind not reducible to the brain is unacceptable to some deranged neuroscientists. No science, not even chemistry, has been reduced to physics. Were it ever to be accomplished, the reduction of chemistry to physics would be a great scientific achievement. Some parts of chemistry have been reduced to physics, which is a good thing, especially when doing so actually enhances our understanding of the chemical process and results in an improved, or more exact, restatement of the relevant chemical laws. But it would be absurd and preposterous simply to reject, on supposed methodological principle, those parts of chemistry that have not been reduced to physics. And how much more absurd would it be to reject higher-level sciences, like biology and ecology, for no other reason than that they have not been reduced to physics.

But reductionism is what modern macroeconomics, under the New Classical hegemony, insists on. No exceptions allowed; don’t even ask. Meekly and unreflectively, modern macroeconomics has succumbed to the absurd and arrogant methodological authoritarianism of the New Classical Revolution. What an embarrassment.

UPDATE (11:43 AM EDT): I made some minor editorial revisions to eliminate some grammatical errors and misplaced or superfluous words.

John Cochrane on the Failure of Macroeconomics

The state of modern macroeconomics is not good; John Cochrane, professor of finance at the University of Chicago, senior fellow of the Hoover Institution, and adjunct scholar of the Cato Institute, writing in Thursday’s Wall Street Journal, thinks macroeconomics is a failure. Perhaps so, but he has trouble explaining why.

The problem that Cochrane is chiefly focused on is slow growth.

Output per capita fell almost 10 percentage points below trend in the 2008 recession. It has since grown at less than 1.5%, and lost more ground relative to trend. Cumulative losses are many trillions of dollars, and growing. And the latest GDP report disappoints again, declining in the first quarter.

Sclerotic growth trumps every other economic problem. Without strong growth, our children and grandchildren will not see the great rise in health and living standards that we enjoy relative to our parents and grandparents. Without growth, our government’s already questionable ability to pay for health care, retirement and its debt evaporate. Without growth, the lot of the unfortunate will not improve. Without growth, U.S. military strength and our influence abroad must fade.

Macroeconomists offer two possible explanations for slow growth: a) too little demand — correctable through monetary or fiscal stimulus — and b) structural rigidities and impediments to growth, for which stimulus is no remedy. Cochrane is not a fan of the demand explanation.

The “demand” side initially cited New Keynesian macroeconomic models. In this view, the economy requires a sharply negative real (after inflation) rate of interest. But inflation is only 2%, and the Federal Reserve cannot lower interest rates below zero. Thus the current negative 2% real rate is too high, inducing people to save too much and spend too little.

New Keynesian models have also produced attractively magical policy predictions. Government spending, even if financed by taxes, and even if completely wasted, raises GDP. Larry Summers and Berkeley’s Brad DeLong write of a multiplier so large that spending generates enough taxes to pay for itself. Paul Krugman writes that even the “broken windows fallacy ceases to be a fallacy,” because replacing windows “can stimulate spending and raise employment.”

If you look hard at New-Keynesian models, however, this diagnosis and these policy predictions are fragile. There are many ways to generate the models’ predictions for GDP, employment and inflation from their underlying assumptions about how people behave. Some predict outsize multipliers and revive the broken-window fallacy. Others generate normal policy predictions—small multipliers and costly broken windows. None produces our steady low-inflation slump as a “demand” failure.

Cochrane’s characterization of what’s wrong with New Keynesian models is remarkably superficial. Slow growth, according to the New Keynesian model, is caused by the real interest rate being insufficiently negative, with the nominal rate at zero and inflation at (less than) 2%. So what is the problem? True, the nominal rate can’t go below zero, but where is it written that the upper bound on inflation is (or must be) 2%? Cochrane doesn’t say. Not only doesn’t he say, he doesn’t even seem interested. It might be that something really terrible would happen if the rate of inflation rose above 2%, but if so, Cochrane or somebody needs to explain why terrible calamities did not befall us during all those comparatively glorious bygone years when the rate of inflation consistently exceeded 2% while real economic growth was at least a percentage point higher than it is now. Perhaps, like Fischer Black, Cochrane believes that the rate of inflation has nothing to do with monetary or fiscal policy. But that is certainly not the standard interpretation of the New Keynesian model that he is using as the archetype for modern demand-management macroeconomic theories. And if Cochrane does believe that the rate of inflation is not determined by either monetary policy or fiscal policy, he ought to come out and say so.
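The arithmetic at issue here is just the Fisher equation. A minimal sketch (the 0% nominal rate and 2% inflation are from Cochrane’s own description; the −4% market-clearing real rate is purely illustrative):

```latex
% Fisher equation: real rate = nominal rate - expected inflation
r = i - \pi^{e} \approx 0\% - 2\% = -2\%

% If the market-clearing real rate were, illustratively, r^{*} = -4\%,
% then with the nominal rate stuck at the zero bound (i \ge 0),
% reaching r^{*} would require expected inflation of
\pi^{e} = i - r^{*} \ge 0\% - (-4\%) = 4\%
```

The point of the sketch is that nothing in this arithmetic fixes the 2% inflation ceiling; a higher inflation target would deliver a more negative real rate without violating the zero bound.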

Cochrane thinks that persistent low inflation and low growth together pose a problem for New Keynesian theories. Indeed they do, but it doesn’t seem that a radical revision of New Keynesian theory would be required to cope with that state of affairs. Cochrane thinks otherwise.

These problems [i.e., a steady low-inflation slump, aka “secular stagnation”] are recognized, and now academics such as Brown University’s Gauti Eggertsson and Neil Mehrotra are busy tweaking the models to address them. Good. But models that someone might get to work in the future are not ready to drive trillions of dollars of public expenditure.

In other words, unless the economic model has already been worked out before a particular economic problem arises, no economic policy conclusions may be deduced from that economic model. May I call this Cochrane’s rule?

Cochrane then proceeds to accuse those who look to traditional Keynesian ideas of rejecting science.

The reaction in policy circles to these problems is instead a full-on retreat, not just from the admirable rigor of New Keynesian modeling, but from the attempt to make economics scientific at all.

Messrs. DeLong and Summers and Johns Hopkins’s Laurence Ball capture this feeling well, writing in a recent paper that “the appropriate new thinking is largely old thinking: traditional Keynesian ideas of the 1930s to 1960s.” That is, from before the 1960s when Keynesian thinking was quantified, fed into computers and checked against data; and before the 1970s, when that check failed, and other economists built new and more coherent models. Paul Krugman likewise rails against “generations of economists” who are “viewing the world through a haze of equations.”

Well, maybe they’re right. Social sciences can go off the rails for 50 years. I think Keynesian economics did just that. But if economics is as ephemeral as philosophy or literature, then it cannot don the mantle of scientific expertise to demand trillions of public expenditure.

This is political rhetoric wrapped in a cloak of scientific objectivity. We don’t have the luxury of knowing in advance what the consequences of our actions will be. The United States has spent trillions of dollars on all kinds of stuff over the past dozen years or so. A lot of it has not worked out well at all. So it is altogether fitting and proper for us to be skeptical about whether we will get our money’s worth for whatever the government proposes to spend on our behalf. But Cochrane’s implicit demand that money be spent only if there is some sort of scientific certainty that it will be well spent can never be met. However, as Larry Summers has pointed out, there are certainly many worthwhile infrastructure projects that could be undertaken, so the risk of committing the “broken windows fallacy” is small. With the government able to borrow at negative real interest rates, the present value of funding such projects is almost certainly positive. So one wonders what the scientific basis is for not funding those projects.

Cochrane compares macroeconomics to climate science:

The climate policy establishment also wants to spend trillions of dollars, and cites scientific literature, imperfect and contentious as that literature may be. Imagine how much less persuasive they would be if they instead denied published climate science since 1975 and bemoaned climate models’ “haze of equations”; if they told us to go back to the complex writings of a weather guru from the 1930s Dustbowl, as they interpret his writings. That’s the current argument for fiscal stimulus.

Cochrane writes as if there were some important scientific breakthrough made by modern macroeconomics — “the new and more coherent models,” either the New Keynesian version of New Classical macroeconomics or Real Business Cycle Theory — that rendered traditional Keynesian economics obsolete or outdated. I have never been a devotee of Keynesian economics, but the fact is that modern macroeconomics has achieved its ascendancy in academic circles almost entirely by way of a misguided methodological preference for axiomatized intertemporal optimization models for which a unique equilibrium solution can be found by imposing the empirically risible assumption of rational expectations. These models, whether in their New Keynesian or Real Business Cycle versions, do not generate better empirical predictions than the old-fashioned Keynesian models, and, as Noah Smith has usefully pointed out, these models have been consistently rejected by private forecasters in favor of the traditional Keynesian models. It is only the dominant clique of ivory-tower intellectuals that cultivates and nurtures these models. The notion that such models are entitled to any special authority or scientific status is based on nothing but the exaggerated self-esteem that is characteristic of almost every intellectual clique, particularly dominant ones.

Having rejected inadequate demand as a cause of slow growth, Cochrane, relying on no model and no evidence, makes a pitch for uncertainty as the source of slow growth.

Where, instead, are the problems? John Taylor, Stanford’s Nick Bloom and Chicago Booth’s Steve Davis see the uncertainty induced by seat-of-the-pants policy at fault. Who wants to hire, lend or invest when the next stroke of the presidential pen or Justice Department witch hunt can undo all the hard work? Ed Prescott emphasizes large distorting taxes and intrusive regulations. The University of Chicago’s Casey Mulligan deconstructs the unintended disincentives of social programs. And so forth. These problems did not cause the recession. But they are worse now, and they can impede recovery and retard growth.

Where, one wonders, is the science on which this sort of seat-of-the-pants speculation is based? Is there any evidence, for example, that the tax burden on businesses or individuals is greater now than it was let us say in 1983-85 when, under President Reagan, the economy, despite annual tax increases partially reversing the 1981 cuts enacted in Reagan’s first year, began recovering rapidly from the 1981-82 recession?


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan

Follow me on Twitter @david_glasner
