Archive for September, 2012

Economy, Heal Thyself

Lately, some smart economists (Eli Dourado, backed up by Larry White, George Selgin, and Tyler Cowen) have been questioning whether it is plausible, four years after the US economy was hit with a severe negative shock to aggregate demand, and about three and a half years since aggregate demand stopped falling (nominal GDP subsequently growing at about a 4% annual rate), that the reason for persistent high unemployment and anemic growth in real output is that nominal aggregate demand has been growing too slowly. Even conceding that the 4% growth in nominal GDP was too slow to generate a rapid recovery from the original shock, they still ask why, almost four years after hitting bottom, we should assume that slow growth in real GDP and persistent high unemployment are the result of deficient aggregate demand rather than of some underlying real disturbance, such as a massive misallocation of resources and capital induced by the housing bubble of 2002 to 2006. In other words, even if it was an aggregate-demand shock that caused a sharp downturn in 2008-09, and even if insufficient aggregate-demand growth unnecessarily weakened and prolonged the recovery, what reason is there to assume that the economy could not, by now, have adjusted to a slightly lower rate of growth in nominal GDP, 4% (compared to the 5 to 5.5% that characterized the period preceding the 2008 downturn)? As Eli Dourado puts it:

If we view the recession as a purely nominal shock, then monetary stimulus only does any good during the period in which the economy is adjusting to the shock. At some point during a recession, people’s expectations about nominal flows get updated, and prices, wages, and contracts adjust. After this point, monetary stimulus doesn’t help.

Thus, Dourado, White, Selgin, and Cowen want to know why an economy not afflicted by some deep structural (i.e., real) problems would not have bounced back to its long-term trend of real output and employment after almost four years of steady 4% nominal GDP growth. Four percent growth in nominal GDP may have been too stingy, but why should we believe that 4% nominal GDP growth would not, in the long run, provide enough aggregate demand to allow an eventual return to the economy’s long-run real growth path? And if one concedes that a steady rate of 4% growth in nominal GDP would eventually get the economy back on its long-run real growth path, why should we assume that four years is not enough time to get there?
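The size of the shortfall implied by that difference in growth rates is easy to make concrete. A minimal sketch, assuming a 5.25% trend rate (the midpoint of the 5 to 5.5% range cited above; the numbers are illustrative, not actual NGDP data):

```python
# Hypothetical illustration: how far below its pre-crisis trend path does
# nominal GDP fall after four years of 4% growth, if trend growth was 5.25%?
# (5.25% is an assumption: the midpoint of the 5 to 5.5% range cited above.)

def level_gap(actual_rate: float, trend_rate: float, years: int) -> float:
    """Percentage shortfall of the actual NGDP level relative to the trend
    path, with both paths starting from a common base of 100."""
    actual = 100 * (1 + actual_rate) ** years
    trend = 100 * (1 + trend_rate) ** years
    return 100 * (1 - actual / trend)

gap = level_gap(0.04, 0.0525, 4)  # roughly a 4.7% shortfall after four years
```

On these assumptions the level gap does not close; it widens by more than a percentage point a year, which is one way of seeing why a permanently lower growth rate is a different animal from a one-time shock.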

Well, let me respond to that question with one of my own: what is the theoretical basis for assuming that an economy subjected to a very significant nominal shock that substantially reduces real output and employment would ever recover from that shock and revert back to its previous growth path? There is, I suppose, a presumption that markets equilibrate themselves through price adjustments, prices adjusting in response to excess demands and supplies until markets again clear. But there is a fallacy of composition at work here. Supply and demand curves are always drawn for a single market. The partial-equilibrium analysis that we are taught in econ 101 operates based on the implicit assumption that all markets other than the one under consideration are in equilibrium. (That is actually a logically untenable assumption, because, according to Walras’s Law, if one market is out of equilibrium at least one other market must also be out of equilibrium, but let us not dwell on that technicality.) But after an economy-wide nominal shock, the actual adjustment process involves not one market but many (if not most, or even all) markets that are simultaneously out of equilibrium. When many markets are out of equilibrium, the adjustment process is much more problematic than under the assumptions of the partial-equilibrium analysis that we are so accustomed to. Just because the adjustment process that brings a single isolated market back from disequilibrium to equilibrium seems straightforward, we are not necessarily entitled to assume that there is an equivalent adjustment process from an economy-wide disequilibrium in which many, most, or all, markets are starting from a position of disequilibrium. A price adjustment in any one market will, in general, affect demands and supplies in at least some other markets.
If only a single market is out of equilibrium, the effects on other markets of price and quantity adjustment in that one market are likely to be small enough that those effects on other markets can be safely ignored. But when many, most, or all, markets are in disequilibrium, the adjustments in some markets may aggravate the disequilibrium in other markets, setting in motion an endless series of adjustments that may – but may not! – lead the economy back to equilibrium. We just don’t know. And the uncertainty about whether equilibrium will be restored becomes even greater when one of the markets out of equilibrium is the market for labor, a market in which income effects are so strong that they inevitably have major repercussions on all other markets.

Dourado et al. take it for granted that people’s expectations about nominal flows get updated, and that prices, wages, and contracts adjust. But adjustment is one thing; equilibration is another. It is one thing to adjust expectations about a market in disequilibrium when all or most markets are in or near equilibrium; it is another to adjust expectations when markets are all out of equilibrium. Real interest rates, as very imperfectly approximated by TIPS, seem to have been falling steadily since early 2011, reflecting increasing pessimism about future growth in the economy. To overcome the growing entrepreneurial pessimism underlying the fall in real interest rates, it would have been necessary for workers to accept wages far below their current levels. That scenario seems wildly unrealistic under any conceivable set of conditions. But even if the massive wage cuts necessary to induce a substantial increase in employment were realistic, wage cuts of that magnitude could have very unpredictable repercussions on consumption spending and prices, potentially setting in motion a destructive deflationary spiral. Dourado assumes that updating expectations about nominal flows, and the adjustments of prices and wages and contracts, lead to equilibrium – that the short run is short. But that is question begging no less than the reasoning of those who look at slow growth and high unemployment and conclude that the economy is operating below its capacity. Dourado is sure that the economy has to return to equilibrium in a finite period of time, and I am sure that if the economy were in equilibrium real output would be growing at least 3% a year, and unemployment would be way under 8%. He has no more theoretical ground for his assumption than I do for mine.

Dourado challenges supporters of further QE to make “a broadly falsifiable claim about how long the short run lasts.” My response is that there is no theory available from which to deduce such a falsifiable claim. And as I have pointed out a number of times, no less an authority than F. A. Hayek demonstrated in his 1937 paper “Economics and Knowledge” that there is no economic theory that entitles us to conclude that the conditions required for an intertemporal equilibrium are in fact ever satisfied, or even that there is a causal tendency for them to be satisfied. All we have is some empirical evidence that economies from time to time roughly approximate such states. But that certainly does not entitle us to assume that any lapse from such a state will be spontaneously restored in a finite period of time.

Do we know that QE will work? Do we know that QE will increase real growth and reduce unemployment? No, but we do have a lot of evidence that monetary policy has succeeded in increasing output and employment in the past by changing expectations of the future price-level path. To assume that the current state of the economy is an equilibrium when unemployment is at a historically high level and inflation at a historically low level seems to me just, well, irresponsible.

So Many QE-Bashers, So Little Time

Both the Financial Times and the Wall Street Journal have been full of articles and blog posts warning of the ill-effects of QE3. In my previous post, I discussed the most substantial of the recent anti-QE discussions. I was going to do a survey of some of the others that I have seen, but today all I can manage is a comment on one of them.

In the Wall Street Journal, Benn Steil, director of international economics at the Council on Foreign Relations, winner of the 2010 Hayek Book Award for his book Money, Markets, and Sovereignty (co-authored with Manuel Hinds), and Dinah Walker, an analyst at the CFR, complain that the Fed supposedly adhered to the Taylor Rule from 1987 to 1999, a period of exceptional monetary stability, and abandoned the rule from 2000 to the present. This is a familiar argument endlessly repeated by none other than John Taylor himself. But as I recently pointed out, Taylor has, implicitly at least, conceded that the supposedly non-discretionary, uncertainty-minimizing Taylor rule comes in multiple versions, and, notwithstanding Taylor’s current claim that he prefers the version that he originally proposed in 1993, he is unable to provide any compelling reason – other than his own exercise of discretion – why that version is entitled to any greater deference than alternative versions of the rule.

Despite the inability of the Taylor rule to specify a unique value, or even a narrow range of values, of the target for the Fed Funds rate, Steil and Walker, presumably taking Taylor’s preferred version as canonical, make the following assertion about the difference between how the Fed Funds rate was set in the 1987-99 period and how it was set in the 2000-08 period.

Between 1987, when Alan Greenspan became Fed chairman, and 1999 a neat approximation of how the Fed responded to market signals was captured by the Taylor Rule. Named for John Taylor, the Stanford economist who introduced the rule in 1993, it stipulated that the fed-funds rate, which banks use to set interest rates, should be nudged up or down proportionally to changes in inflation and economic output. By our calculations, the Taylor Rule explained 69% of the variation in the fed-funds rate over that period. (In the language of statistics, the relationship between the rule and the rate had an R-squared of .69.)

Then came a dramatic change. Between 2000 and 2008, when the Fed cut the fed-funds target rate to near zero, the R-squared collapsed to .35. The Taylor Rule was clearly no longer guiding U.S. monetary policy.
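For reference, the version of the rule Taylor proposed in 1993 is simple enough to write down in a few lines. A sketch (the formula is the standard 1993 one, with a 2% equilibrium real rate and a 2% inflation target; the sample inputs are illustrative, not actual data):

```python
def taylor_rule_1993(inflation: float, output_gap: float,
                     real_rate: float = 2.0, target_inflation: float = 2.0) -> float:
    """Fed funds target implied by Taylor's original 1993 rule:
    i = pi + r* + 0.5*(pi - pi*) + 0.5*y, all in percentage points."""
    return (inflation + real_rate
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# With inflation on target and no output gap, the rule implies a 4% funds rate.
baseline = taylor_rule_1993(inflation=2.0, output_gap=0.0)
```

The alternative version mentioned in Taylor’s 1999 paper doubles the weight on the output gap to 1.0, which is why the two rules in Taylor’s graph can imply target rates several percentage points apart whenever the output gap is large.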

This is a pretty extravagant claim. The 1987-99 period was marked by a single recession, a recession triggered largely by a tightening of monetary policy when inflation was rising above the 3.5 to 4 percent range that was considered acceptable after the Volcker disinflation in the early 1980s. So the 1990-91 recession was triggered by the application of the Taylor rule, and the recession triggered a response that was consistent with the Taylor rule. The 2000-08 period was marked by two recessions, both of which were triggered by financial stresses, not rising inflation. To say that the Fed abandoned a rule that it was following in the earlier period is simply to say that circumstances that the Fed did not have to face in the 1987-99 period confronted the Fed in the 2000-08 period. The difference in the R-squared found by Steil and Walker may indicate no more than that the economic environment was more variable in the latter period than in the former.

As I pointed out in my recent post (hyperlinked above) on the multiple Taylor rules, following the Taylor rule in 2008 would have meant targeting the Fed Funds rate for most of that year at an even higher level than the disastrously high rate the Fed was actually targeting while the economy was already in recession and entering, even before the Lehman debacle, one of the sharpest contractions since World War II. Indeed, Taylor’s preferred version implied that the Fed should have increased (!) the Fed Funds rate in the spring of 2008.

Steil and Walker attribute the Fed’s deviation from the Taylor rule to an implicit strategy of targeting asset prices.

In a now-famous speech invoking the analogy of a “helicopter drop of money,” [Bernanke] argued that monetary interventions that boosted asset values could help combat deflation risk by lowering the cost of capital and improving the balance sheets of potential borrowers.

Mr. Bernanke has since repeatedly highlighted asset-price movements as a measure of policy success. In 2003 he argued that “unanticipated changes in monetary policy affect stock prices . . . by affecting the perceived riskiness of stocks,” suggesting an explicit reason for using monetary policy to affect the public’s appetite for stocks. And this past February he noted that “equity prices [had] risen significantly” since the Fed began reinvesting maturing securities.

This is a tendentious misreading of Bernanke’s statements. He is not targeting stock prices, but he is arguing that movements in stock prices are correlated with expectations about the future performance of the economy, so that rising stock prices in response to a policy decision of the Fed provide some evidence that the policy has improved economic conditions. Why should that be controversial?

Steil and Walker then offer a strange statistical “test” of their theory that the Fed is targeting stock prices.

Between 2000 and 2008, the level of household risk aversion—which we define as the ratio of household currency holdings, bank deposits and money-market funds to total household financial assets—explained a remarkable 77% of the variation in the fed-funds rate (an R-squared of .77). In other words, the Fed was behaving as if it were targeting “risk on, risk off,” moving interest rates to push investors toward or away from risky assets.
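The statistic underlying that claim is just the R-squared of a one-variable regression of the funds rate on a liquidity ratio. A sketch of both calculations in plain Python, using invented numbers rather than Steil and Walker’s data:

```python
# Illustrative only: the series below are invented, not Steil and Walker's data.

def liquidity_ratio(currency: float, deposits: float, mmf: float,
                    total_assets: float) -> float:
    """Share of household financial assets held in money-like form --
    the quantity Steil and Walker label 'household risk aversion'."""
    return (currency + deposits + mmf) / total_assets

def r_squared(x: list, y: list) -> float:
    """R-squared of a one-variable least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    alpha = my - beta * mx
    ss_res = sum((b - (alpha + beta * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

ratios = [0.18, 0.20, 0.24, 0.27, 0.31]   # invented liquidity-preference ratios
funds = [6.0, 4.5, 2.0, 1.5, 0.25]        # invented fed funds targets (%)
fit = r_squared(ratios, funds)            # high, because the fake series co-move
```

A high R-squared here shows only that the funds rate and liquidity preference moved together, which is exactly what one would expect of a central bank accommodating money demand; it does not identify a stock-price target.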

What Steil and Walker are measuring by their ratio of “household risk aversion” is liquidity preference or the demand for money. They seem to have a problem with the Fed acting to accommodate the public’s demand for liquidity. The alternative to the Fed’s accommodating a demand for liquidity is to have that demand manifested in deflation. That’s what happened in 1929-33, when the Fed deliberately set out to combat stock-market speculation by raising interest rates until the stock market crashed, and only then reduced rates to 2%, which, in an environment of rapidly falling prices, was still a ferociously tight monetary policy. The .77 R-squared that Steil and Walker find reflects the fact, for which we can all be thankful, that the Fed has at least prevented a deflationary catastrophe from overtaking the US economy.

The fact is that what mainly governs the level of stock prices is expectations about the future performance of the economy. If the Fed takes seriously its dual mandate, then it is necessarily affecting the level of stock prices. That is something very different from the creation of a “Bernanke put” in which the Fed is committed to “ease monetary policy whenever there is a stock market correction.” I don’t know why some people have a problem understanding the difference.  But they do, or at least act as if they do.

Bullard Defends the Indefensible

James Bullard, the President of the St. Louis Federal Reserve Bank, is a very fine economist, having worked his way up the ranks after joining the research department of the St. Louis Fed in 1990 as a newly minted Ph.D. from Indiana University, publishing his research widely in leading journals (and also contributing an entry on “learning” to Business Cycles and Depressions: An Encyclopedia, which I edited). Bullard may just be the most centrist member of the FOMC (see here), and his pronouncements on monetary policy are usually measured and understated, eschewing the outspoken style of some of his colleagues (especially the three leading inflation hawks on the FOMC, Charles Plosser, Jeffrey Lacker, and Richard Fisher).

But even though Bullard is a very sensible and knowledgeable guy, whose views I take seriously, I am having a lot of trouble figuring out what he was up to in the op-ed piece he published in today’s Financial Times (“Patience needed for Fed’s dual mandate”), in which he argued that the fact that the Fed has persistently undershot its inflation target, while unemployment has been way over any reasonable estimate of the rate consistent with full employment, is no reason for the Fed to change its policy toward greater ease. In other words, Bullard sees no reason why the Fed should now seek, or at least tolerate, an inflation rate that temporarily meets or exceeds the Fed’s current 2% target. In a recent interview, Bullard stated that he would not have supported the decision to embark on QE3.

To support his position, Bullard cites a 2007 paper in the American Economic Review by Smets and Wouters “Shocks and Frictions in US Business Cycles.” The paper estimates a DSGE model of the US economy and uses it to generate out-of-sample predictions that are comparable to those of a Bayesian vector autoregression model. Here’s how Bullard characterizes the rationale for QE3 and explains how that rationale is undercut by the results of the Smets and Wouters paper.

The Fed has a directive that calls for it to maintain stable prices as well as maximum employment, along with moderate long-term interest rates. Since unemployment is high by historical standards (8.1 per cent), observers argue the Fed must not be “maximising employment”. Inflation, as measured by the personal consumption expenditures deflator price index, has increased to about 1.3 per cent in the year to July. The Fed’s target is 2 per cent, so critics can say the Fed has not met this part of the mandate. When unemployment is above the natural rate, they say, inflation should be above the inflation target, not below.

I disagree. So does the economic literature. Here is my account of where we are: the US economy was hit by a large shock in 2008 and 2009. This lowered output and employment far below historical trend levels while reducing inflation substantially below 2 per cent. The question is: how do we expect these variables to return to their long-run or targeted values under monetary policy? That is, should the adjustment path be relatively smooth, or should we expect some overshooting?

Evidence, for example a 2007 paper by Frank Smets and Raf Wouters, suggests that it is reasonable to believe that output, employment and inflation will return to their long-run or targeted values slowly and steadily. In the jargon, we refer to this type of convergence as “monotonic”: a shock knocks the variables off their long-run values but they gradually return, without overshooting on the other side. Wild dynamics would be disconcerting.

What is wrong with Bullard’s argument? Well, just because Smets and Wouters estimated a DSGE model in 2007 that they were able to use to generate “good” out-of-sample predictions does not prove that the model would generate good out-of-sample predictions for 2008-2012. Maybe it does, I don’t know. But Bullard is a very smart economist, and he has a bunch of very smart economists working for him. Have they used the Smets and Wouters DSGE model to generate out-of-sample predictions for 2008 to 2012? I don’t know. But if they have, why doesn’t Bullard even mention what they found?

Bullard says that the Smets and Wouters paper “suggests that it is reasonable to believe that output, employment and inflation will return to their long-run or targeted values slowly and steadily.” Even if we stipulate to that representation of what the paper shows, that is setting the bar very low. Bullard’s representation calls to mind a famous, but often misunderstood, quote by a dead economist.

The long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is past the ocean is flat again.

Based on a sample that included no shock to output, employment, and inflation of comparable magnitude to the shock experienced in 2008-09, Bullard is prepared to opine confidently that we are now on a glide path headed toward the economy’s potential output, toward full employment, and toward 2% inflation. All we need is patience. But Bullard provides no evidence, not even a simulation based on the model that he says he is relying on, that would tell us how long it will take to reach the end state whose realization he so confidently promises. Nor does he provide any evidence, not even a simulation based on the Smets-Wouters model – a model that, as far as I know, has not yet achieved anything like canonical status – estimating what the consequences of increasing the Fed’s inflation target would be, much less the consequences of changing the policy rule underlying the Smets-Wouters model from inflation targeting to something like a price-level target or an NGDP target. And since the Lucas Critique tells us that a simulation based on a sample in which one policy rule was being implemented cannot be relied upon to predict the consequences of adopting a different policy rule from the one used in the original estimation, I have no idea how Bullard can be so confident about what the Smets and Wouters paper can teach us about adopting QE3.

PS  In the comment below Matt Rognlie, with admirable clearness and economy, fleshes out my intuition that the Smets-Wouters paper provides very little empirical support for the proposition that Bullard is arguing for in his FT piece.  Many thanks and kudos to Matt for his contribution.

Taylor Rules?

John Taylor recently had a post on his blog with an accompanying graph showing the actual Fed Funds rate target of the Fed since 2005 and the Fed Funds rate implied by two versions of the Taylor rule: one that he specifically proposed, and another used in a study by Janet Yellen that Taylor, in a 1999 paper, had mentioned as a possible alternative version of his rule. Taylor has subsequently tried to put some distance between himself and the alternative version, which implies a far lower optimal interest-rate target than the version that he now professes to prefer. But while not explicitly endorsing the alternative when he first mentioned it, neither did Taylor express any reservations about it, providing no hint that he considered it inconsistent with the spirit of his rule or obviously inferior to his own earlier version, for which he now insists he has a preference.

What I find especially noteworthy (aside from the remarkable fact that, as Scott Sumner noted, Taylor’s preferred rule would have called for a rate increase in early 2008, when the economy was already in recession and on the verge of one of the sharpest one-quarter declines in real GDP on record, in the third quarter of 2008, even before the Lehman panic of September-October) is that both versions of the Taylor rule implied a target interest rate substantially higher than the Fed Funds rate actually in effect for most of 2008. So Taylor is implicitly endorsing a far tighter monetary policy in 2008, after the economy had already entered a recession and begun a rapid contraction, than the disastrously tight policy to which the economy was then being subjected by the FOMC.

Now, in fairness to Taylor, he could argue that the difficulties all stemmed from the prolonged period of very low interest rates following the 2001 recession. But that simply underscores the inherent unworkability of a mechanical rule of the type that Taylor is so enamored of. Conditions are rarely ideal, so you can never be sure that the interest rate implied by the Taylor rule (of whichever version) is preferable to the rate chosen at the discretion of the monetary authority. In retrospect, some of the time the FOMC seems to have done better than the Taylor rules, and some of the time one or both of the Taylor rules seem to have done better than the FOMC. Not exactly an overwhelmingly good performance. So why should anyone assume that adopting the Taylor rule would be an improvement, all things considered, over the exercise of discretion?

Taylor wants to argue that the exercise of discretion is bad in and of itself. But which is The Taylor rule? Taylor likes one version of the rule, but he can’t provide any argument that the Taylor rule that he prefers is better than the one that he now says that he doesn’t prefer, though no such preference was expressed when he first mentioned the alternative version. And even now, though he claims to like one version better than the other, he can only conclude his post by saying that more research on the relative merits of the rules is necessary. In other words, adopting the Taylor rule is not sufficient to eliminate policy uncertainty, as the gap in the diagram between the rates implied by the two rules clearly indicates.

The upshot of all this is just that for Taylor to suggest that adopting his rule would somehow reduce policy uncertainty when there is clearly no way to specify the parameters necessary to generate a predictable value for the interest rate target implied by the rule is simply disingenuous.  Moreover, to suggest that there is any evidence that following the Taylor rule (whatever such a vague and imprecise concept can possibly mean) would have led to better outcomes than the not very impressive performance of the FOMC is just laughable.

PS This will be my last post until next week after the Jewish New Year. My best wishes go out to all for a happy, healthy, and peaceful New Year.

Two Cheers for Ben

I admit that I have not been kind to Ben Bernanke. And although I have never met him, he seems like a very nice man, and I think that I would probably like Mr. Bernanke if I knew him. So, aside from my pleasure at seeing a concrete step taken toward recovery, I am happy to be able to say something nice about Mr. Bernanke for a change. And it’s not just me, obviously the stock market has also been pleased by Mr. Bernanke’s performance of late, and especially today.

Almost three months ago, I wrote a post in which I complained when the FOMC in its statement described a weakening economic recovery and falling inflation, already less than the Fed’s target, with no sense of urgency about improving the economic situation and ensuring that inflation would not continually fail to reach even the stingy and inadequate target that the Fed had set for itself.  A few days later, I voiced alarm that inflation expectations were falling rapidly, suggesting the risk of another financial crisis. The crisis did not come to pass, and Bernanke’s opaque ambiguity about policy, combined with an explicit acknowledgment of a weakening economy, provided some Fed watchers with grounds for hope that the Fed might be considering a change in policy. But in his testimony to Congress in July, Bernanke declined to offer any reassurance that a change of policy was in the offing, a wasted opportunity that I strongly criticized. However, in its August meeting, the FOMC finally gave a clear signal that it was dissatisfied with the current situation, and would take steps to change the policy in September unless clear signs of a strengthening recovery emerged that would indicate that no change of policy was necessary to get a recovery started.  By late July and early August, the perception that the Fed was moving toward a change in policy led to a mini-rally even before the August FOMC meeting.

Thus, the entire summer can be viewed as a gradual build up to today’s announcement. From early July until today, inflation expectations, as approximated by the 10-year breakeven spread between 10-year Treasuries and 10-year TIPS, have been gradually rising as have stock prices. And today, the 10-year breakeven spread increased by 11 basis points, while the S&P 500 rose by almost 2%, the gains coming almost entirely after release of the FOMC statement shortly after 12PM this afternoon. Since early July, the 10-year breakeven spread has increased by 38 basis points, and the S&P 500 has risen by 9%.

The accompanying chart tracks the 10-year breakeven TIPS spread and the S&P 500 between July 12 and September 13 (both series normalized to be 100 on July 12). The correlation coefficient between the two series is 92.5%.
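The normalization and the correlation in the chart are straightforward to reproduce. A sketch with invented series standing in for the actual TIPS-spread and S&P 500 data:

```python
# Illustrative only: invented series standing in for the TIPS spread and S&P 500.

def normalize_to_100(series: list) -> list:
    """Rescale a series so that its first observation equals 100."""
    base = series[0]
    return [100 * v / base for v in series]

def pearson_corr(x: list, y: list) -> float:
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

spread = normalize_to_100([2.10, 2.15, 2.22, 2.31, 2.48])  # invented breakevens (%)
spx = normalize_to_100([1340, 1355, 1370, 1405, 1460])     # invented S&P closes
rho = pearson_corr(spread, spx)  # near 1, since the fake series rise together
```

Since normalization is a linear rescaling, it does not affect the correlation coefficient; it only makes the two series comparable on a single chart.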

To provide a bit more perspective on what the increase in stock prices means, let me also note that today’s close of the S&P 500 was 1459.99. That is still about 100 points below the all-time high of the S&P 500, reached almost 5 years ago in October 2007. If the S&P 500 had increased at a modest 5% annual rate since that peak, it would now be in the neighborhood of 2000, so the S&P 500, even after more than doubling since it bottomed out in March 2009, may be at less than 75% of the level it would have reached if the economy were performing near capacity. To suggest that the S&P 500 is now overvalued – just another bubble – as critics of further QE have asserted, doesn’t seem even remotely reasonable.
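The counterfactual in that comparison is simple compound-growth arithmetic. A sketch, taking the October 2007 closing high as roughly 1565:

```python
# Hypothetical counterfactual: where would the S&P 500 be if it had grown 5%
# a year for five years from its October 2007 closing high (roughly 1565)?

peak_2007 = 1565.0   # approximate October 2007 closing high
growth = 0.05
years = 5

counterfactual = peak_2007 * (1 + growth) ** years   # roughly 2000
today_close = 1459.99                                # close cited in the text
ratio = today_close / counterfactual                 # about 0.73, i.e. under 75%
```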

So Mr. Bernanke had a very good day today. Let’s hope it’s the start of a trend of good decision-making, and not just a fluke.

The Wisdom of David Laidler

Michael Woodford’s paper for the Jackson Hole Symposium on Monetary Policy wasn’t the only important paper on monetary economics to be posted on the internet last month. David Laidler, perhaps the world’s greatest expert on the history of monetary theory and macroeconomics since the time of Adam Smith, has written an important paper with the somewhat cryptic title, “Two Crises, Two Ideas, and One Question.” Most people will figure out pretty quickly which two crises Laidler is referring to, but you will have to read the paper in order to figure out which two ideas and which question Laidler has on his mind. Actually, you won’t have to read the paper if you keep reading this post, because I am about to tell you. The two ideas are what Laidler calls the “Fisher relation” between real and nominal interest rates, and the idea of a lender of last resort. The question is whether a market economy is inherently stable or unstable.

How does one weave these threads into a coherent narrative? Well, to really understand that you really will just have to read Laidler’s paper, but this snippet from the introduction will give you some sense of what he is up to.

These two particular ideas are especially interesting, because in the 1960s and ’70s, between our two crises, they feature prominently in the Monetarist reassessment of the Great Depression, which helped to establish the dominance in macroeconomic thought of the view that, far from being a manifestation of deep flaws in the very structure of the market economy, as it had at first been taken to be, this crisis was the consequence of serious policy errors visited upon an otherwise robustly self-stabilizing system. The crisis that began in 2007 has re-opened this question.

The Monetarist counterargument to the Keynesian view that the market economy is inherently subject to wide fluctuations and has no strong tendency toward full employment was that the Great Depression was caused primarily by a policy shock, the failure of the Fed to fulfill its duty to act as a lender of last resort during the US financial crisis of 1930-31. Originally, the Fisher relation did not figure prominently in this argument, but it eventually came to dominate Monetarism and the post-Monetarist/New Keynesian orthodoxy in which the job of monetary policy was viewed as setting a nominal interest rate (via a Taylor rule) that would be consistent with expectations of an almost negligible rate of inflation of about 2%.

This comfortable state of affairs – Monetarism without money is how Laidler describes it — in which an inherently stable economy would glide along its long-run growth path with low inflation, only rarely interrupted by short, shallow recessions, was unpleasantly overturned by the housing bubble and the subsequent financial crisis, producing the steepest downturn since 1937-38. That downturn has posed a challenge to Monetarist orthodoxy inasmuch as the sudden collapse, more or less out of nowhere in 2008, seemed to suggest that the market economy is indeed subject to a profound instability, as the Keynesians of old used to maintain. In the Great Depression, Monetarists could argue, it was all, or almost all, the fault of the Federal Reserve for not taking prompt action to save failing banks and for not expanding the money supply sufficiently to avoid deflation. But in 2008, the Fed provided massive support to banks, and even to non-banks like AIG, to prevent a financial meltdown, and then embarked on an aggressive program of open-market purchases that prevented an incipient deflation from taking hold.

As a result, self-identifying Monetarists have split into two camps. I will call one camp the Market Monetarists, with whom I identify, even though I am much less of a fan of Milton Friedman, the father of Monetarism, than most Market Monetarists are. Borrowing terminology adopted in the last twenty years or so by political conservatives in the US to distinguish old-fashioned conservatives from neoconservatives, I will call the old-style Monetarists paleo-Monetarists. The paleo-Monetarists include Allan Meltzer, the late Anna Schwartz, Thomas Humphrey, and John Taylor (a latecomer to Monetarism who has learned quite well how to talk the Monetarist talk). For the paleo-Monetarists, in the absence of deflation, the extension of Fed support to non-banking institutions and the massive expansion of the Fed’s balance sheet cannot be justified. But this poses a dilemma for them. If there is no deflation, why is an inherently stable economy not recovering? It seems to me that it is this conundrum which has led paleo-Monetarists into taking the dubious position that the extreme weakness of the economic recovery is a consequence of fiscal and monetary-policy uncertainty, the passage of interventionist legislation like the Affordable Care Act and the Dodd-Frank Act, and the imposition of various other forms of interventionist regulations by the Obama administration.

Market Monetarists, on the other hand, have all along looked to monetary policy as the ultimate cause of both the downturn in 2008 and the lack of a recovery since. So, on this interpretation, what separates paleo-Monetarists from Market Monetarists is whether outright deflation is needed to precipitate a serious malfunction in a market economy, or whether something less drastic can suffice. Paleo-Monetarists agree that Japan in the 1990s, and even early in the 2000s, was suffering from a deflationary monetary policy, a policy requiring extraordinary measures to counteract. But the annual rate of deflation in Japan was never more than about 1% a year, a far cry from the 10% annual rate of deflation in the US between late 1929 and early 1933. Paleo-Monetarists must therefore explain why there is a radical difference between 1% inflation and 1% deflation. Market Monetarists, for their part, must explain why a positive rate of inflation, albeit less than the generally preferred 2% rate, has not been adequate to sustain a real recovery even now, more than four years after the original downturn. Or, if you prefer, the question could be restated as why a 3 to 4% rate of increase in NGDP is not adequate to sustain a real recovery, especially given the assumption, shared by paleo-Monetarists and Market Monetarists, that a market economy is generally stable and tends to move toward a full-employment equilibrium.

Here is where I think Laidler’s focus on the Fisher relation is critically important, though Laidler doesn’t explicitly address the argument that I am about to make. This argument, which I originally made in my paper “The Fisher Effect under Deflationary Expectations,” and have repeated in several subsequent blog posts (e.g., here), is that there is no specific rate of deflation that necessarily results in a contracting economy. There is plenty of historical experience, as George Selgin and others have demonstrated, that deflation is consistent with strong economic growth and full employment. In a certain sense, deflation can be a healthy manifestation of growth, allowing that growth, i.e., increasing productivity of some or all factors of production, to be translated into falling output prices. However, deflation is only healthy in an economy that is growing because of productivity gains. If productivity is flagging, there is no room for healthy (productivity-driven) deflation.

The Fisher relation between the nominal interest rate, the real interest rate, and the expected rate of deflation basically tells us how much room there is for healthy deflation. If we take the real interest rate as given, that rate constitutes the upper bound on healthy deflation. Why? Because deflation greater than the real rate of interest implies a nominal rate of interest less than zero. But the nominal rate of interest has a lower bound at zero. So what happens if the expected rate of deflation is greater than the real rate of interest? Fisher doesn’t tell us, because in equilibrium it isn’t possible for the rate of deflation to exceed the real rate of interest. But that doesn’t mean that there can’t be a disequilibrium in which the expected rate of deflation is greater than the real rate of interest. We (or I) can’t exactly model that disequilibrium process, but whatever it is, it’s ugly. Really ugly. Most investment stops, the rate of return on cash (i.e., the expected rate of deflation) being greater than the rate of return on real capital. Because the expected yield on holding cash exceeds the expected yield on holding real capital, holders of real capital try to sell their assets for cash. The only problem is that no one wants to buy real capital with cash. The result is a collapse of asset values. At some point, asset values having fallen, and the stock of real capital having worn out without being replaced, a new equilibrium may be reached at which the real rate will again exceed the expected rate of deflation. But that is an optimistic scenario, because the adjustment process of falling asset values and a declining stock of real capital may itself feed pessimistic expectations about the future value of real capital, so that there literally might not be a floor to the downward spiral, at least not unless some exogenous force can reverse it, e.g., by changing price-level expectations.
Given the riskiness of allowing the rate of deflation to come too close to the real interest rate, it seems prudent to keep deflation below the real rate of interest by a couple of points, so that the nominal interest rate doesn’t fall below 2%.
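
To state the constraint explicitly, here is the standard Fisher relation combined with the zero-lower-bound condition (the notation is my own: i is the nominal rate, r the real rate, and π^e expected inflation, so that −π^e is expected deflation):

\[
i = r + \pi^{e}, \qquad i \ge 0 \;\Longrightarrow\; -\pi^{e} \le r
\]

In equilibrium, that is, expected deflation cannot exceed the real rate of interest, and the two-point cushion just suggested amounts to requiring i ≥ 0.02, i.e., expected deflation no greater than r − 0.02.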

But notice that this cumulative downward process doesn’t really require actual deflation. The same process could take place even if the expected rate of inflation were positive in an economy with a negative real interest rate. Real interest rates have been steadily falling for over a year, and are now negative even at maturities up to 10 years. What that suggests is that the ceiling on tolerable deflation is negative. Negative deflation is the same as inflation, which means that there is a lower bound to tolerable inflation. When the economy is operating in an environment of very low or negative real rates of interest, the economy can’t recover unless the rate of inflation is above the lower bound of tolerable inflation. We are not in the critical situation that we were in four years ago, when the expected yield on cash was greater than the expected yield on real capital, but it is a close call. Why are businesses, despite high earnings, holding so much cash rather than using it to purchase real capital assets? My interpretation is that with real interest rates negative, businesses do not see enough profitable investment projects. Raising the expected price level would increase the number of investment projects that appear profitable, thereby inducing additional investment spending, and finally inducing businesses to draw down, rather than add to, their cash holdings.
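
The same arithmetic can be put in a short, illustrative sketch (the function names and sample rates are my own, chosen only to mirror the numbers in the surrounding paragraphs, not taken from any source):

```python
def min_tolerable_inflation(real_rate):
    """Fisher relation: nominal rate i = real_rate + expected inflation.
    The zero lower bound (i >= 0) requires expected inflation >= -real_rate,
    so -real_rate is the floor on expected inflation for a given real rate."""
    return -real_rate

def prudent_inflation_target(real_rate, buffer=0.02):
    """Add the two-point cushion suggested above, keeping the nominal rate
    from falling below 2%."""
    return min_tolerable_inflation(real_rate) + buffer

# With a 3% real rate, deflation of up to 3% is tolerable:
assert min_tolerable_inflation(0.03) == -0.03

# With a -1% real rate (roughly the situation described in the post),
# even zero inflation is too low; at least 1% expected inflation is needed:
assert min_tolerable_inflation(-0.01) == 0.01

# And the prudent target would be about 3%:
assert abs(prudent_inflation_target(-0.01) - 0.03) < 1e-12
```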

So it seems to me that paleo-Monetarists have been misled by a false criterion, one not implied by the Fisher relation, that has become central to Monetarist and post-Monetarist policy orthodoxy. The mere fact that we have not had deflation since 2009 does not mean that monetary policy has not been contractionary, or, at any rate, insufficiently expansionary. So someone committed to the proposition that a market economy is inherently stable is not obliged, as the paleo-Monetarists seem to think, to take the position that monetary policy could not have been responsible for the failure of the feeble recovery since 2009 to bring us back to full employment. Whether it even makes sense to think of an economy as inherently stable or unstable is a whole other question, which I will leave for another day.

HT:  Lars Christensen

John Cochrane Misunderestimates the Fed

In my previous post, I criticized Ben Bernanke’s speech last week at the annual symposium on monetary policy at Jackson Hole, Wyoming. It turns out that the big event at the symposium was not Bernanke’s speech but a 98-page paper by Michael Woodford, of Columbia University. Woodford’s paper was important, because he is widely considered the world’s top monetary theorist, and he endorsed the idea proposed by the intrepid, indefatigable and indispensable Scott Sumner that the Fed stop targeting inflation and instead target a steady growth path of nominal GDP. That endorsement constitutes a rather stunning turn of events in which Sumner’s idea (OK, Scott didn’t invent the idea, but he made a big deal out of it when nobody else was paying any attention) has gone from being a fringe idea to the newly emerging orthodoxy in monetary economics.

John Cochrane, however, is definitely not with the program, registering his displeasure in a blog post earlier this week. In this post, I am going to challenge two assertions that Cochrane makes. These aren’t the only ones that could be challenged, but it’s getting late.  The first assertion is that inflation can never bring about an increase in output.

Mike [Woodford]‘s enthusiasm for deliberate inflation is even more puzzling to me.  Mike uses the word “stimulus,” never differentiating between real and nominal stimulus. Surely, we don’t want to cook up some inflation just for its own sake — we want to cook up some inflation because we think it will goose output. But why? Why especially will increasing expected inflation help? Because that is the aim of all the policies under discussion here — promising to keep rates low even once inflation rises, adopting “nominal GDP targets,” helicopter drops, or similar policies such as raising the inflation target.

I don’t put much faith in Phillips curves to start with  — the idea that deliberate inflation raises output. I put less faith in the idea floating around Jackson hole that a little inflation will set us permanently back on the trend line, not just be a little sugar rush and then back to sclerosis.

But it’s a rare Phillips curve in which raising expected inflation is a good thing.  It just gives you more inflation, with if anything less output and employment.

Cochrane is simply asserting that expected inflation cannot increase output and employment. The theoretical basis for that proposition is an argument, generally attributed to Milton Friedman and Edward Phelps, but advanced by others before them, that an increase in inflation cannot generate a permanent increase in employment. The problem with that theoretical argument is that it is a comparative-statics result: by assumption, it starts from an initial equilibrium with zero inflation and posits an increase in the inflation parameter. The Friedman-Phelps argument shows that the new equilibrium corresponding to the higher rate of inflation has the same level of output and employment as the initial zero-inflation equilibrium, so that the derivatives of output and employment with respect to inflation are both zero. That comparative-statics exercise is fine, but it is irrelevant to the situation we have been in since 2008. We are not starting from equilibrium; we are starting from a disequilibrium in which output and employment are well below their equilibrium levels. The question is whether an increase in inflation, starting from an under-employment disequilibrium, would increase output and employment. The Friedman-Phelps argument tells us exactly nothing about that question.

And aside from the irrelevance of Cochrane’s theoretical argument to the question whether inflation can reduce unemployment when employment is below its equilibrium level – I am here positing that it is possible for employment to be persistently below its equilibrium level – there is also clear historical evidence. In 1933, a sharp increase in the US price level, precipitated by FDR’s devaluation of the dollar, produced a spectacular increase in output and employment between April and July, the fastest four-month expansion of output and employment, combined with a doubling of the Dow-Jones Industrial Average, in US history. Nor could the increase in the price level have been unanticipated, since it was directly tied to a very public devaluation of the dollar and to an explicit policy objective, announced by FDR, of raising the US price level back to where it had been in 1926.

The second assertion made by Cochrane that I want to challenge is the following.

Nothing communicates like a graph. Here’s Mike [Woodford]‘s, which will help me to explain the view:

The graph is nominal GDP and the trend through 2007 extrapolated. (Nominal GDP is price times quantity, so goes up with either inflation or larger real output.)

Now, let’s be clear what a nominal GDP target is and is not. Many people (and a few persistent commenters on this blog!) urge nominal GDP targeting by looking at a graph like this and saying “see, if the Fed had kept nominal GDP on trend, we wouldn’t have had such a huge recession. Sure, part of it might have been more inflation, but surely part of a steady nominal GDP would have been less recession.” This is NOT what Mike is talking about.

Mike recognizes, as I do, that the Fed can do nothing more to raise nominal GDP today. Rates are at zero. The Fed has did [sic] what it could. The trend line was not achievable.

Nick Rowe, in his uniquely simple and elegant style, has identified the fallacy in Woodford’s and Cochrane’s view of monetary policy, which treats the short-term interest rate as the exclusive channel through which monetary policy can work. On that view, once the short-term rate reaches the zero lower bound, the central bank has become impotent. That’s just wrong, as Nick demonstrates.

Rather than restate Nick’s argument, let me add some historical context. The discovery that the short-term interest rate set by the central bank is the primary tool of monetary policy was not made by Michael Woodford; it goes back at least to Henry Thornton. It was a commonplace of nineteenth-century monetary orthodoxy. Except that in those days, the bank rate, as the English called it, was viewed as the instrument by which the Bank of England could control the level of its gold reserves, not the overall state of the economy, for which the Bank of England had no legal responsibility. It was Knut Wicksell who, at the end of the nineteenth century, first advocated using the bank rate as a tool for controlling the price level and thus the business cycle. J. M. Keynes and Dennis Robertson also advocated using the bank rate as an instrument for controlling the price level and the business cycle, but the most outspoken and emphatic exponent of using the bank rate as an instrument of macroeconomic control was Ralph Hawtrey. Keynes continued to advocate using the bank rate until the early 1930s, but he then began to advocate fiscal policy and public works spending as the primary weapon against unemployment. Hawtrey never wavered in his advocacy of the bank rate as a control mechanism, but even he acknowledged that there could be circumstances under which reducing the bank rate might not be effective in stimulating the economy. Here’s how R. D. C. Black, in a biographical essay on Hawtrey, described Hawtrey’s position:

It was always a corollary of Hawtrey’s analysis that the economy, although lacking any automatic stabilizer, could nevertheless be effectively stabilized by the proper use of credit policy; it followed that fiscal policy in general and public works in particular constituted an unnecessary and inappropriate control mechanism. Yet Hawtrey was always prepared to admit that there could be circumstances in which no conceivable easing of credit would induce traders to borrow more and that in such a case government expenditure might be the only means of increasing employment.

This possibility of such a “credit deadlock” was admitted in all Hawtrey’s writings from Good and Bad Trade onwards, but treated as a most unlikely exceptional case. In Capital and Employment, however, he admitted “that unfortunately since 1930 it has come to plague the world, and has confronted us with problems which have threatened the fabric of civilisation with destruction.”

So indeed it had, and in the years that followed, opinion, both academic and political, became increasingly convinced that the solution lay in the methods of stabilization by fiscal policy which followed from Keynes’s theories rather than in those of stabilization by credit policy which followed from Hawtrey’s.

However, a few paragraphs later, Black observes that Hawtrey understood that monetary policy could be effective even in a credit deadlock when reducing the bank rate would accomplish nothing.

Hawtrey was inclined to be sympathetic when Roosevelt adopted the so-called “Warren plan” and raised the domestic price of gold. Despairing of seeing effective international cooperation to raise and stabilize the world price level, Hawtrey now envisaged exchange depreciation as the only way in which a country like the United States could “break the credit deadlock by making some branches of economic activity remunerative.” Not unnaturally there were those, like Per Jacobsson of the Bank for International Settlements, who found it hard to reconcile this apparent enthusiasm for exchange depreciation with Hawtrey’s previous support for international stabilization schemes. To them his reply was “the difference between what I now advocate and the programme of monetary stability is the difference between measures for treating a disease and measures for maintaining health when re-established. It is no use trying to stabilise a price level which leaves industry under-employed and working at a loss and makes half the debtors bankrupt.” Here, as always, Hawtrey was faithful to the logic of his system, which implied that if international central bank co-operation could not be achieved, each individual central bank must be free to pursue its own credit policy, without the constraint of fixed exchange rates.  [See my posts, “Hawtrey on Competitive Devaluations: Bring It On,” and “Hawtrey on the Short, but Sweet, 1933 Recovery.”]

Cochrane asserts that the Fed has no power to raise nominal income. Does he believe that the Fed is unable to depreciate the dollar relative to other currencies? If so, does he believe that the Fed is less able to control the exchange rate of the dollar in relation to, say, the euro than the Swiss National Bank is able to control the value of the Swiss franc in relation to the euro? Just by coincidence, I wrote about the Swiss National Bank exactly one year ago in a post I called “The Swiss National Bank Teaches Us a Lesson.” The Swiss National Bank, faced with a huge demand for Swiss francs, was in imminent danger of presiding over a disastrous deflation caused by the rapid appreciation of the Swiss franc against the euro. The Swiss National Bank could not fight deflation by cutting its bank rate, so it announced that it would sell unlimited quantities of Swiss francs at an exchange rate of 1.20 francs per euro, thereby preventing the Swiss franc from appreciating against the euro, and preventing domestic deflation in Switzerland. The action confounded those who claimed that the Swiss National Bank was powerless to prevent the franc from appreciating against the euro.

If the Fed wants domestic prices to rise, it can debauch the dollar by selling unlimited quantities of dollars in exchange for other currencies at exchange rates below their current levels. This worked for the US under FDR in 1933, and it worked for the Swiss National Bank in 2011. It has worked countless times for other central banks. What I would like to know is why Cochrane thinks that the Fed is less capable of debauching the currency today than FDR was in 1933 or the Swiss National Bank was in 2011.


About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
