Archive for the 'Taylor rule' Category

Milton Friedman and the Phillips Curve

In December 1967, Milton Friedman delivered his Presidential Address to the American Economic Association in Washington DC. In those days the AEA met in the week between Christmas and New Year’s, in contrast to the more recent practice of holding the convention in the week after New Year’s. That’s why the anniversary of Friedman’s 1967 address was celebrated at the 2018 AEA convention. A special session was dedicated to commemorating that famous address, published in the March 1968 American Economic Review, and, fittingly, one of the papers discussed at the session was written by the outgoing AEA president, Olivier Blanchard. The other papers were written by Thomas Sargent and Robert Hall, and by Greg Mankiw and Ricardo Reis. The papers were discussed by Lawrence Summers, Eric Nakamura, and Stanley Fischer. An all-star cast.

Maybe in a future post, I will comment on the papers presented in the Friedman session, but in this post I want to discuss a point that has been generally overlooked, not only in the three “golden” anniversary papers on Friedman and the Phillips Curve, but, as best as I can recall, in all the commentaries I’ve seen about Friedman and the Phillips Curve. The key point to understand about Friedman’s address is that his argument was basically an extension of the idea of monetary neutrality, which says that the real equilibrium of an economy corresponds to a set of relative prices that allows all agents simultaneously to execute their optimal desired purchases and sales conditioned on those relative prices. So it is only relative prices, not absolute prices, that matter. Taking an economy in equilibrium, if you were suddenly to double all prices, relative prices remaining unchanged, the equilibrium would be preserved and the economy would proceed exactly – and optimally – as before, as if nothing had changed. (There are some complications about what is happening to the quantity of money in this thought experiment that I am skipping over.) On the other hand, if you change just a single price, not only would the market in which that price is determined be disequilibrated, but at least one, and potentially more than one, other market would be disequilibrated as well. The point here is that the real economy rules, and equilibrium in the real economy depends on relative, not absolute, prices.

What Friedman did was to argue that if money is neutral with respect to changes in the price level, it should also be neutral with respect to changes in the rate of inflation. The idea that you can wring some extra output and employment out of the economy just by choosing to increase the rate of inflation goes against the grain of two basic principles: (1) monetary neutrality (i.e., the real equilibrium of the economy is determined solely by real factors) and (2) Friedman’s famous non-existence (of a free lunch) theorem. In other words, you can’t make the economy as a whole better off just by printing money.

Or can you?

Actually you can, and Friedman himself understood that you can, but he argued that the possibility of making the economy as a whole better off (in the sense of increasing total output and employment) depends crucially on whether inflation is expected or unexpected. Only if inflation is not expected does it serve to increase output and employment. If inflation is correctly expected, the neutrality principle reasserts itself, so that output and employment are no different from what they would have been had prices not changed.

What that means is that policy makers (monetary authorities) can cause output and employment to increase by inflating the currency, as implied by the downward-sloping Phillips Curve, but that simply reflects that actual inflation exceeds expected inflation. And, sure, the monetary authorities can always surprise the public by raising the rate of inflation above the expected rate, but that doesn’t mean that the public can be perpetually fooled by a monetary authority determined to keep inflation higher than expected. If that is the strategy of the monetary authorities, it will lead, sooner or later, to a very unpleasant outcome.

So, in any time period – the length of the time period corresponding to the time during which expectations are given – the short-run Phillips Curve for that time period is downward-sloping. But given the futility of perpetually delivering higher than expected inflation, the long-run Phillips Curve from the point of view of the monetary authorities trying to devise a sustainable policy must be essentially vertical.

Two quick parenthetical remarks. Friedman’s argument was far from original. Many critics of Keynesian policies had made similar arguments; the names Hayek, Haberler, Mises and Viner come immediately to mind, but the list could easily be lengthened. But the earliest version of the argument of which I am aware is Hayek’s 1934 reply in Econometrica to Alvin Hansen and Herbert Tout, whose 1933 Econometrica article reviewing recent business-cycle literature had criticized Hayek’s assertion in Prices and Production that a monetary expansion financing investment spending in excess of voluntary savings would be unsustainable. They pointed out that there was nothing to prevent the monetary authority from continuing to create money, thereby continually financing investment in excess of voluntary savings. Hayek’s reply was that a permanent constant rate of monetary expansion would not suffice to permanently finance investment in excess of savings, because once that monetary expansion was expected, prices would adjust so that in real terms the constant flow of monetary expansion would correspond to the same amount of investment that had been undertaken prior to the first and unexpected round of monetary expansion. To maintain a rate of investment permanently in excess of voluntary savings would require progressively increasing rates of monetary expansion over and above the expected rate of monetary expansion, which would sooner or later prove unsustainable. The gist of the argument, more than three decades before Friedman’s 1967 Presidential Address, was exactly the same as Friedman’s.

A further aside. But what Hayek failed to see in making this argument was that, in so doing, he was refuting his own argument in Prices and Production that only a constant rate of total expenditure and total income is consistent with maintenance of a real equilibrium in which voluntary saving and planned investment are equal. Obviously, any rate of monetary expansion, if correctly foreseen, would be consistent with a real equilibrium with saving equal to investment.

My second remark is to note the ambiguous meaning of the short-run Phillips Curve relationship. The underlying causal relationship reflected in the negative correlation between inflation and unemployment can be understood either as increases in inflation causing unemployment to go down, or as increases in unemployment causing inflation to go down. Undoubtedly the causality runs in both directions, but subtle differences in the understanding of the causal mechanism can lead to very different policy implications. Usually the Keynesian understanding of the causality is that it runs from unemployment to inflation, while a more monetarist understanding treats inflation as a policy instrument that determines (with expected inflation treated as a parameter) at least directionally the short-run change in the rate of unemployment.

Now here is the main point that I want to make in this post. The standard interpretation of the Friedman argument is that since attempts to increase output and employment by monetary expansion are futile, the best policy for a monetary authority to pursue is a stable and predictable one that keeps the economy at or near the optimal long-run growth path that is determined by real – not monetary – factors. Thus, the best policy is to find a clear and predictable rule for how the monetary authority will behave, so that monetary mismanagement doesn’t inadvertently become a destabilizing force causing the economy to deviate from its optimal growth path. In the 50 years since Friedman’s address, this message has been taken to heart by monetary economists and monetary authorities, leading to a broad consensus in favor of inflation targeting with the target now almost always set at 2% annual inflation. (I leave aside for now the tricky question of what a clear and predictable monetary rule would look like.)

But this interpretation, clearly the one that Friedman himself drew from his argument, doesn’t actually follow from the argument that monetary expansion can’t affect the long-run equilibrium growth path of an economy. The monetary neutrality argument, being a pure comparative-statics exercise, assumes that an economy, starting from a position of equilibrium, is subjected to a parametric change (either in the quantity of money or in the price level) and then asks what the new equilibrium of the economy will look like. The answer is: it will look exactly like the prior equilibrium, except that the price level will be twice as high, with twice as much money as previously, but with relative prices unchanged. The same sort of reasoning, with appropriate adjustments, can show that changing the expected rate of inflation will have no effect on the real equilibrium of the economy, with only the rate of inflation and the rate of monetary expansion affected.

This comparative-statics exercise teaches us something, but not as much as Friedman and his followers thought. True, you can’t get more out of the economy – at least not for very long – than its real equilibrium will generate. But what if the economy is not operating at its real equilibrium? Even Friedman didn’t believe that the economy always operates at its real equilibrium. Just read his Monetary History of the United States. Real-business-cycle theorists do believe that the economy always operates at its real equilibrium, but they, unlike Friedman, think monetary policy is useless, so we can forget about them – at least for purposes of this discussion. So if we have reason to think that the economy is falling short of its real equilibrium, as almost all of us believe that it sometimes does, why should we assume that monetary policy could not nudge the economy in the direction of its real equilibrium?

The answer to that question is not so obvious, but one answer might be that if you use monetary policy to move the economy toward its real equilibrium, you might make mistakes sometimes and overshoot the real equilibrium and then bad stuff would happen and inflation would run out of control, and confidence in the currency would be shattered, and you would find yourself in a re-run of the horrible 1970s. I get that argument, and it is not totally without merit, but I wouldn’t characterize it as overly compelling. On a list of compelling arguments, I would put it just above, or possibly just below, the domino theory on the basis of which the US fought the Vietnam War.

But even if the argument is not overly compelling, it should not be dismissed entirely, so here is a way of taking it into account. Just for fun, I will call it a Taylor Rule for the Inflation Target (IT). Let us assume that the long-run inflation target is 2% and let us say that (Y − Y*) is the output gap between current real GDP and potential GDP (i.e., the GDP corresponding to the real equilibrium of the economy). We could then define the following Taylor Rule for the inflation target:

IT = α(2%) − β((Y − Y*)/Y*).

This equation says that the inflation target in any period would be the default Inflation Target of 2% times an adjustment coefficient α, designed to keep successively chosen Inflation Targets from deviating from the long-term price-level path corresponding to 2% annual inflation, minus some fraction β of the output gap expressed as a percentage of potential GDP. Thus, for example, if the output gap were −5% and β were 0.5, the short-term Inflation Target would be raised to 4.5% if α were 1.

However, if on average output gaps are expected to be negative, then α would have to be chosen to be less than 1 in order for the actual time path of the price level to revert to a target price-level path corresponding to a 2% annual rate of inflation.
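The arithmetic of the rule is simple enough to put in a few lines of code. This is just a toy sketch of the formula, with the coefficient values and function name being illustrative assumptions of mine rather than anything from the literature:

```python
def inflation_target(output_gap, alpha=1.0, beta=0.5, long_run_target=2.0):
    """Toy Taylor-style rule for the inflation target (IT).

    output_gap is (Y - Y*)/Y* in percent (negative when output is
    below potential). Returns alpha*long_run_target - beta*output_gap,
    also in percent, so a negative gap raises the target.
    """
    return alpha * long_run_target - beta * output_gap

# A 5% shortfall of output below potential, with beta = 0.5 and
# alpha = 1, raises the short-term target from 2% to 4.5%.
print(inflation_target(-5.0))  # 4.5
print(inflation_target(0.0))   # 2.0 at potential output
```

Choosing α below 1 in this sketch shrinks the 2% baseline in every period, which is how past overshoots of the price-level path would be clawed back.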

Such a procedure would fit well with the Federal Reserve’s current dual mandate of price stability and maximum employment. The long-term price-level path would correspond to the price-stability mandate, while the adjustable short-term choice of the IT would serve the goal of maximum employment, raising the inflation target when unemployment was high as a countercyclical spur to recovery. But short-term changes in the IT would not be allowed to cause a long-term deviation of the price level from its target path. The dual mandate would ensure that relatively high inflation in periods of high unemployment would be compensated for by relatively low inflation in periods of low unemployment.

Alternatively, you could just target nominal GDP at a rate consistent with a long-run average 2% inflation target for the price level, with the target for nominal GDP adjusted over time as needed to ensure that the 2% average inflation target for the price level was also maintained.
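The nominal-GDP alternative can be sketched the same way. In the toy function below, the 3% real-growth figure, the 0.5 correction coefficient gamma, and the function itself are my own illustrative assumptions, introduced only to show how the target could be adjusted over time to keep average inflation at 2%:

```python
def ngdp_growth_target(price_level, target_price_level,
                       real_growth=3.0, inflation_target=2.0, gamma=0.5):
    """Toy NGDP growth target, in percent.

    Baseline nominal growth is assumed real growth plus the 2%
    inflation target; a fraction gamma of the percentage gap between
    the target price-level path and the actual price level is added,
    so that average inflation reverts to 2% over time.
    """
    level_gap = 100.0 * (target_price_level - price_level) / target_price_level
    return real_growth + inflation_target + gamma * level_gap

# On the target price path, the NGDP growth target is 5% (3% real +
# 2% inflation); a price level 2% below its target path raises it to 6%.
print(ngdp_growth_target(100.0, 100.0))  # 5.0
print(ngdp_growth_target(98.0, 100.0))   # 6.0
```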


What Is the Historically Challenged, Rule-Worshipping John Taylor Talking About?

A couple of weeks ago, I wrote a post chiding John Taylor for his habitual verbal carelessness. As if that were not enough, Taylor, in a recent talk at the IMF, appearing on a panel on monetary policy with former Fed Chairman Ben Bernanke and the former head of the South African central bank, Gill Marcus,  extends his trail of errors into new terrain: historical misstatement. Tony Yates and Paul Krugman have already subjected Taylor’s talk to well-deserved criticism for its conceptual confusion, but I want to focus on the outright historical errors Taylor blithely makes in his talk, a talk noteworthy, apart from its conceptual confusion and historical misstatements, for the incessant repetition of the meaningless epithet “rules-based,” as if he were a latter-day Homeric rhapsodist incanting a sacred text.

Taylor starts by offering his own “mini history of monetary policy in the United States” since the late 1960s.

When I first started doing monetary economics . . ., monetary policy was highly discretionary and interventionist. It went from boom to bust and back again, repeatedly falling behind the curve, and then over-reacting. The Fed had lofty goals but no consistent strategy. If you measure macroeconomic performance as I do by both price stability and output stability, the results were terrible. Unemployment and inflation both rose.

What Taylor means by “interventionist,” other than establishing that he is against it, is not clear. Nor is the meaning of “bust” in this context. The recession of 1970 was perhaps the mildest of the entire post-World War II era, and the 1974-75 recession was certainly severe, but it was largely the result of a supply shock and politically imposed wage and price controls exacerbated by monetary tightening. (See my post about 1970s stagflation.) Taylor talks about the Fed’s lofty goals, but doesn’t say what they were. In fact in the 1970s, the Fed was disclaiming responsibility for inflation, and Arthur Burns, a supposedly conservative Republican economist, appointed by Nixon to be Fed Chairman, actually promoted what was then called an “incomes policy,” thereby enabling and facilitating Nixon’s infamous wage-and-price controls. The Fed’s job was to keep aggregate demand high, and, in the widely held view at the time, it was up to the politicians to keep business and labor from getting too greedy and causing inflation.

Then in the early 1980s policy changed. It became more focused, more systematic, more rules-based, and it stayed that way through the 1990s and into the start of this century.

Yes, in the early 1980s, policy did change, and it did become more focused, and for a short time – about a year and a half – it did become more rules-based. (I have no idea what “systematic” means in this context.) And the result was the sharpest and longest post-World War II downturn until the Little Depression. Policy changed, because, under Volcker, the Fed took ownership of inflation. It became more rules-based, because, under Volcker, the Fed attempted to follow a modified sort of Monetarist rule, seeking to keep the growth of the monetary aggregates within a pre-determined target range. I have explained in my book and in previous posts (e.g., here and here) why the attempt to follow a Monetarist rule was bound to fail and why the attempt would have perverse feedback effects, but others, notably Charles Goodhart (discoverer of Goodhart’s Law), had identified the problem even before the Fed adopted its misguided policy. The recovery did not begin until the summer of 1982 after the Fed announced that it would allow the monetary aggregates to grow faster than the Fed’s targets.

So the success of Fed monetary policy under Volcker can properly be attributed a) to the Fed’s taking ownership of inflation and b) to its decision to abandon the rules-based policy urged on it by Milton Friedman and his Monetarist acolytes like Allan Meltzer, whom Taylor now cites approvingly for supporting rules-based policies. The only monetary policy rule that the Fed ever adopted under Volcker having been scrapped prior to the beginning of the recovery from the 1981-82 recession, the notion that the Great Moderation was ushered in by the Fed’s adoption of a “rules-based” policy is a total misrepresentation.

But Taylor is not done.

Few complained about spillovers or beggar-thy-neighbor policies during the Great Moderation.  The developed economies were effectively operating in what I call a nearly international cooperative equilibrium.

Really! Has Professor Taylor, who served as Under Secretary of the Treasury for International Affairs, ever heard of the Plaza and the Louvre Accords?

The Plaza Accord or Plaza Agreement was an agreement between the governments of France, West Germany, Japan, the United States, and the United Kingdom, to depreciate the U.S. dollar in relation to the Japanese yen and German Deutsche Mark by intervening in currency markets. The five governments signed the accord on September 22, 1985 at the Plaza Hotel in New York City. (“Plaza Accord” Wikipedia)

The Louvre Accord was an agreement, signed on February 22, 1987 in Paris, that aimed to stabilize the international currency markets and halt the continued decline of the US Dollar caused by the Plaza Accord. The agreement was signed by France, West Germany, Japan, Canada, the United States and the United Kingdom. (“Louvre Accord” Wikipedia)

The chart below shows the fluctuation in the trade weighted value of the US dollar against the other major trading currencies since 1980. Does it look like there was a nearly international cooperative equilibrium in the 1980s?

[Chart: trade-weighted value of the US dollar against the other major trading currencies, 1980 to the present]

But then there was a setback. The Fed decided to hold the interest rate very low during 2003-2005, thereby deviating from the rules-based policy that worked well during the Great Moderation.  You do not need policy rules to see the change: With the inflation rate around 2%, the federal funds rate was only 1% in 2003, compared with 5.5% in 1997 when the inflation rate was also about 2%.

Well, in 1997 the expansion was six years old and the unemployment rate was under 5% and falling. In 2003, the expansion was barely under way and unemployment was rising above 6%.
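To see the point in terms of Taylor’s own rule: the original 1993 version sets the funds rate at inflation plus half the inflation gap, plus half the output gap, plus a 2% neutral real rate. A quick sketch (the output-gap figures below are round, hypothetical numbers chosen purely for illustration, not measured values):

```python
def taylor_1993(inflation, output_gap, target_inflation=2.0, neutral_real_rate=2.0):
    """Original Taylor (1993) rule for the federal funds rate:
    i = p + 0.5*(p - p*) + 0.5*y + r*, all in percent,
    where p is inflation, y the output gap, and r* the neutral real rate.
    """
    return (inflation + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap + neutral_real_rate)

# With inflation at 2% in both years, the prescription differs only
# through the output gap:
print(taylor_1993(2.0, 2.0))   # 5.0 -- boom conditions, late 1990s
print(taylor_1993(2.0, -2.0))  # 3.0 -- slack conditions, 2003
```

So even on the rule’s own terms, identical inflation rates in 1997 and 2003 did not call for identical funds rates.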

I could point out other dubious historical characterizations that Taylor makes in his talk, but I will just mention a few others relating to the Volcker episode.

Some argue that the historical evidence in favor of rules is simply correlation not causation.  But this ignores the crucial timing of events:  in each case, the changes in policy occurred before the changes in performance, clear evidence for causality.  The decisions taken by Paul Volcker came before the Great Moderation.

Yes, and as I pointed out above, inflation came down when Volcker and the Fed took ownership of the inflation, and were willing to tolerate or inflict sufficient pain on the real economy to convince the public that the Fed was serious about bringing the rate of inflation down to a rate of roughly 4%. But the recovery and the Great Moderation did not begin until the Fed renounced the only rule that it had ever adopted, namely targeting the rate of growth of the monetary aggregates. The Fed, under Volcker, never even adopted an explicit inflation target, much less a specific rule for setting the Federal Funds rate. The Taylor rule was just an ex post rationalization of what the Fed had done by instinct.

Another point relates to the zero bound. Wasn’t that the reason that the central banks had to deviate from rules in recent years? Well it was certainly not a reason in 2003-2005 and it is not a reason now, because the zero bound is not binding. It appears that there was a short period in 2009 when zero was clearly binding. But the zero bound is not a new thing in economics research. Policy rule design research took that into account long ago. The default was to move to a stable money growth regime not to massive asset purchases.

OMG! Is Taylor’s preferred rule at the zero lower bound the stable money growth rule that Volcker tried, but failed, to implement in 1981-82? Is that the lesson that Taylor wants us to learn from the Volcker era?

Some argue that rules based policy for the instruments is not needed if you have goals for the inflation rate or other variables. They say that all you really need for effective policy making is a goal, such as an inflation target and an employment target. The rest of policymaking is doing whatever the policymakers think needs to be done with the policy instruments. You do not need to articulate or describe a strategy, a decision rule, or a contingency plan for the instruments. If you want to hold the interest rate well below the rule-based strategy that worked well during the Great Moderation, as the Fed did in 2003-2005, then it’s ok as long as you can justify it at the moment in terms of the goal.

This approach has been called “constrained discretion” by Ben Bernanke, and it may be constraining discretion in some sense, but it is not inducing or encouraging a rule as a “rules versus discretion” dichotomy might suggest.  Simply having a specific numerical goal or objective is not a rule for the instruments of policy; it is not a strategy; it ends up being all tactics.  I think the evidence shows that relying solely on constrained discretion has not worked for monetary policy.

Taylor wants a rule for the instruments of policy. Well, although Taylor will not admit it, a rule for the instruments of policy is precisely what Volcker tried to implement in 1981-82 when he was trying – and failing – to target the monetary aggregates, thereby driving the economy into a rapidly deepening recession, before escaping, by scrapping his monetary-growth targets, from the positive-feedback loop in which he and the economy were trapped. Since 2009, Taylor has been calling for the Fed to raise the currently targeted instrument, the Fed Funds rate, even though inflation has been below the Fed’s 2% target almost continuously for the past three years. Not only does Taylor want to target the instrument of policy, he wants the instrument target to preempt the policy target. If that is not all tactics and no strategy, I don’t know what is.

The Verbally Challenged John Taylor Strikes Again

John Taylor, tireless self-promoter of “rules-based monetary policy” (whatever that means), inventor of the legendary Taylor Rule, and very likely the next Chairman of the Federal Reserve Board if a Republican is elected President of the United States in 2016, has a history of verbal faux pas, which I have been documenting not very conscientiously for almost three years now.

Just to review my list (for which I make no claim of exhaustiveness), Professor Taylor was awarded the Hayek Prize of the Manhattan Institute in 2012 for his book First Principles: Five Keys to Restoring America’s Prosperity. The winner of the prize (a cash award of $50,000) also delivers a public Hayek Lecture in New York City to a distinguished audience consisting of wealthy and powerful and well-connected New Yorkers, drawn from the city’s financial, business, political, journalistic, and academic elites. The day before delivering his public lecture, Professor Taylor published a teaser as an op-ed in that paragon of journalistic excellence the Wall Street Journal editorial page. (This is what I had to say when it was published.)

In his teaser, Professor Taylor invoked Hayek’s Road to Serfdom and his Constitution of Liberty to explain the importance of the rule of law and its relationship to personal freedom. Certainly Hayek had a great deal to say and a lot of wisdom to impart on the subjects of the rule of law and personal freedom, but Professor Taylor, though the winner of the Hayek Prize, was obviously not interested enough to read Hayek’s chapter on monetary policy in The Constitution of Liberty; if he had, he could not possibly have made the following assertions.

Stripped of all technicalities, this means that government in all its actions is bound by rules fixed and announced beforehand—rules which make it possible to foresee with fair certainty how the authority will use its coercive powers in given circumstances and to plan one’s individual affairs on the basis of this knowledge. . . .

Rules for monetary policy do not mean that the central bank does not change the instruments of policy (interest rates or the money supply) in response to events, or provide loans in the case of a bank run. Rather they mean that they take such actions in a predictable manner.

But guess what. Hayek took a view rather different from Taylor’s in The Constitution of Liberty:

[T]he case against discretion in monetary policy is not quite the same as that against discretion in the use of the coercive powers of government. Even if the control of money is in the hands of a monopoly, its exercise does not necessarily involve coercion of private individuals. The argument against discretion in monetary policy rests on the view that monetary policy and its effects should be as predictable as possible. The validity of the argument depends, therefore, on whether we can devise an automatic mechanism which will make the effective supply of money change in a more predictable and less disturbing manner than will any discretionary measures likely to be adopted. The answer is not certain.

Now that was bad enough – quoting Hayek as an authority for a position that Hayek explicitly declined to take in the very source invoked by Professor Taylor. But that was just Professor Taylor’s teaser. Perhaps it got a bit garbled in the teasing process. So I went to the Manhattan Institute website and watched the video of the entire Hayek Lecture delivered by Professor Taylor. But things got even worse in the lecture – much worse. I mean disastrously worse. (This is what I had to say after watching the video.)

Taylor, while of course praising Hayek at length, simply displayed an appalling ignorance of Hayek’s writings and an inability to comprehend, or a carelessness so egregious that he was unable to properly read, the title — yes, the title! — of a pamphlet written by Hayek in the 1970s, when inflation was reaching the double digits in the US and much of Europe. The pamphlet, entitled Full Employment at any Price?, was an argument that the pursuit of full employment as an absolute goal, with no concern for price stability, would inevitably lead to accelerating inflation. The title was chosen to convey the idea that the pursuit of full employment was not without costs and that a temporary gain in employment at the cost of higher inflation might well not be worth it. Professor Taylor, however, could not even read the title correctly, construing the title as prescriptive, and — astonishingly — presuming that Hayek was advocating the exact policy that the pamphlet was written to confute.

Perhaps Professor Taylor was led to this mind-boggling misinterpretation by a letter from Milton Friedman, cited by Taylor, complaining about Hayek’s criticism, in the pamphlet in question, of Friedman’s dumb 3-percent rule, a criticism to which Friedman responded in his letter to Hayek. But Professor Taylor, unable to understand what Hayek and Friedman were arguing about, bewilderingly assumed that Friedman was criticizing Hayek’s advocacy of increasing the rate of inflation to whatever level was needed to ensure full employment, culminating in this ridiculous piece of misplaced condescension:

Well, once again, Milton Friedman, his compatriot in his cause — and it’s good to have compatriots by the way, very good to have friends in his cause. He wrote in another letter to Hayek – Hoover Archives – “I hate to see you come out, as you do here, for what I believe to be one of the most fundamental violations of the rule of law that we have, namely, discretionary activities of central bankers.”

So, hopefully, that was enough to get everybody back on track. Actually, this episode – I certainly, obviously, don’t mean to suggest, as some people might, that Hayek changed his message, which, of course, he was consistent on everywhere else.

And all of this wisdom was delivered by Professor Taylor in his Hayek Lecture upon being awarded the Hayek Prize. Well done, Professor Taylor, well done.

Then last July, in another Wall Street Journal op-ed, Professor Taylor replied to Alan Blinder’s criticism of a bill introduced by House Republicans to require the Fed to use the Taylor Rule as its method for determining what its target would be for the Federal Funds rate. The title of the op-ed was “John Taylor’s reply to Alan Blinder,” and the subtitle was “The Fed’s ad hoc departures from rule-based monetary policy has [sic!] hurt the economy.” When I pointed out the grammatical error, and wondered whether the mistake was attributable to Professor Taylor or stellar editorial writers employed by the Wall Street Journal editorial page, David Henderson, a frequent contributor to the Journal, wrote a comment to assure me that it was certainly not Professor Taylor’s mistake. I took Henderson’s word for it. (Just for the record, the mistake is still there, you can look it up.)

But now there’s this. In today’s New York Times, there is an article about how, in an earlier era, criticism of the Fed came mainly from Democrats complaining about money being too tight and interest rates too high, while now criticism comes mainly from Republicans complaining that money is too easy and interest rates too low. At the end of the article we find this statement from Professor Taylor:

“Practical experience and empirical studies show that checklist-free medical care is wrought with dangers just as rules-free monetary policy is,” Mr. Taylor wrote in a recent defense of his proposal.

There he goes again. Here are five definitions of “wrought” from the online Merriam-Webster dictionary:

1:  worked into shape by artistry or effort <carefully wrought essays>

2:  elaborately embellished :  ornamented

3:  processed for use :  manufactured <wrought silk>

4:  beaten into shape by tools :  hammered —used of metals

5:  deeply stirred :  excited —often used with up <gets easily wrought up over nothing>

Obviously, what Professor Taylor meant to say is that medical care is “fraught” (rhymes with “wrought”) with dangers, but some people just can’t be bothered with pesky little details like that, any more than winners of the Hayek Prize can be bothered with actually reading the works of Hayek to which they refer in their Hayek Lecture. Let’s just hope that if Professor Taylor’s ambition to become Fed Chairman is realized, he’ll be a little bit more attentive to, say, the position of decimal points than he is to the placement of question marks and to the difference in meaning between words that sound almost alike.

PS I see that the Manhattan Institute has chosen James Grant as the winner of the 2015 Hayek Prize for his book The Forgotten Depression. I’m sure the 2015 Hayek Lecture will be far more finely wrought grammatically and stylistically than the 2012 Hayek Lecture, but, judging from the book for which the prize was awarded, I am not overly optimistic that it will make a great deal more sense than the 2012 Hayek Lecture, though that is not a very high bar to clear.

So Many QE-Bashers, So Little Time

Both the Financial Times and the Wall Street Journal have been full of articles and blog posts warning of the ill-effects of QE3. In my previous post, I discussed the most substantial of the recent anti-QE discussions. I was going to do a survey of some of the others that I have seen, but today all I can manage is a comment on one of them.

In the Wall Street Journal, Benn Steil, director of international economics at the Council on Foreign Relations, winner of the 2010 Hayek Book Award for his book Money, Markets, and Sovereignty (co-authored with Manuel Hinds), and Dinah Walker, an analyst at the CFR, complain that the Fed has stopped following the Taylor Rule, to which it supposedly adhered from 1987 to 1999, a period of exceptional monetary stability, and which it has supposedly abandoned from 2000 to the present. This is a familiar argument endlessly repeated by none other than John Taylor himself. But as I recently pointed out, Taylor has, implicitly at least, conceded that the supposedly non-discretionary, uncertainty-minimizing Taylor rule comes in multiple versions, and, notwithstanding Taylor’s current claim that he prefers the version that he originally proposed in 1993, he is unable to provide any compelling reason – other than his own exercise of discretion – why that version is entitled to any greater deference than alternative versions of the rule.

Despite the inability of the Taylor rule to specify a unique value, or even a narrow range of values, for the target of the Fed Funds rate, Steil and Walker, presumably taking Taylor’s preferred version as canonical, make the following assertion about the difference between how the Fed Funds rate was set in the 1987-99 period compared with how it was set in the 2000-08 period.

Between 1987, when Alan Greenspan became Fed chairman, and 1999 a neat approximation of how the Fed responded to market signals was captured by the Taylor Rule. Named for John Taylor, the Stanford economist who introduced the rule in 1993, it stipulated that the fed-funds rate, which banks use to set interest rates, should be nudged up or down proportionally to changes in inflation and economic output. By our calculations, the Taylor Rule explained 69% of the variation in the fed-funds rate over that period. (In the language of statistics, the relationship between the rule and the rate had an R-squared of .69.)

Then came a dramatic change. Between 2000 and 2008, when the Fed cut the fed-funds target rate to near zero, the R-squared collapsed to .35. The Taylor Rule was clearly no longer guiding U.S. monetary policy.
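For concreteness, the rule Steil and Walker have in mind can be written down in a few lines. This is a sketch of Taylor’s original 1993 formula (a 2% equilibrium real rate, a 2% inflation target, and 0.5 weights on the inflation and output gaps); the function name and the sample inputs are mine:

```python
def taylor_rule_1993(inflation, output_gap):
    """Implied fed funds target under Taylor's original 1993 rule:
    i = 2 + inflation + 0.5*(inflation - 2) + 0.5*output_gap,
    with inflation and the output gap in percentage points."""
    return 2.0 + inflation + 0.5 * (inflation - 2.0) + 0.5 * output_gap

# With inflation at its 2% target and a closed output gap,
# the rule implies a 4% nominal funds rate.
print(taylor_rule_1993(2.0, 0.0))  # 4.0
```

Steil and Walker’s R-squared of .69 is then just the share of the variance in the actual funds rate explained by regressing it on the rate this formula implies.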

This is a pretty extravagant claim. The 1987-99 period was marked by a single recession, a recession triggered largely by a tightening of monetary policy when inflation was rising above the 3.5 to 4 percent range that was considered acceptable after the Volcker disinflation in the early 1980s. So the 1990-91 recession was triggered by the application of the Taylor rule, and the recession triggered a response that was consistent with the Taylor rule. The 2000-08 period was marked by two recessions, both of which were triggered by financial stresses, not rising inflation. To say that the Fed abandoned a rule that it was following in the earlier period is simply to say that circumstances that the Fed did not have to face in the 1987-99 period confronted the Fed in the 2000-08 period. The difference in the R-squared found by Steil and Walker may indicate no more than that the economic environment was more variable in the latter period than in the former.

As I pointed out in my recent post (hyper-linked above) on the multiple Taylor rules, following the Taylor rule in 2008 would have meant targeting the Fed Funds rate for most of 2008 at an even higher level than the disastrously high rate that the Fed was targeting in 2008 while the economy was already in recession and entering, even before the Lehman debacle, one of the sharpest contractions since World War II. Indeed, Taylor’s preferred version implied that the Fed should have increased (!) the Fed Funds rate in the spring of 2008.

Steil and Walker attribute the Fed’s deviation from the Taylor rule to an implicit strategy of targeting asset prices.

In a now-famous speech invoking the analogy of a “helicopter drop of money,” [Bernanke] argued that monetary interventions that boosted asset values could help combat deflation risk by lowering the cost of capital and improving the balance sheets of potential borrowers.

Mr. Bernanke has since repeatedly highlighted asset-price movements as a measure of policy success. In 2003 he argued that “unanticipated changes in monetary policy affect stock prices . . . by affecting the perceived riskiness of stocks,” suggesting an explicit reason for using monetary policy to affect the public’s appetite for stocks. And this past February he noted that “equity prices [had] risen significantly” since the Fed began reinvesting maturing securities.

This is a tendentious misreading of Bernanke’s statements. He is not targeting stock prices, but he is arguing that movements in stock prices are correlated with expectations about the future performance of the economy, so that rising stock prices in response to a policy decision of the Fed provide some evidence that the policy has improved economic conditions. Why should that be controversial?

Steil and Walker then offer a strange statistical “test” of their theory that the Fed is targeting stock prices.

Between 2000 and 2008, the level of household risk aversion—which we define as the ratio of household currency holdings, bank deposits and money-market funds to total household financial assets—explained a remarkable 77% of the variation in the fed-funds rate (an R-squared of .77). In other words, the Fed was behaving as if it were targeting “risk on, risk off,” moving interest rates to push investors toward or away from risky assets.
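The calculation they describe is mechanical enough to sketch. This is the textbook R-squared of a simple regression of the funds rate on their liquidity ratio; the series below are made-up numbers purely for illustration, not their data:

```python
def risk_aversion_ratio(currency, deposits, money_market, total_assets):
    """Steil and Walker's 'household risk aversion': safe, liquid
    holdings as a share of total household financial assets."""
    return (currency + deposits + money_market) / total_assets

def r_squared(x, y):
    """Share of the variance in y explained by a simple linear
    regression of y on x (squared correlation coefficient)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)

# Hypothetical annual observations: the liquidity ratio rising
# as the funds rate falls, as in 2000-08.
ratio = [0.18, 0.20, 0.24, 0.30]
funds_rate = [6.5, 5.0, 2.0, 1.0]
print(round(r_squared(ratio, funds_rate), 2))
```

A high R-squared from such a regression shows only that the two series moved together, which is exactly what one would expect if the Fed were accommodating swings in the demand for liquidity, with no targeting of stock prices involved.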

What Steil and Walker are measuring by their “ratio of household risk aversion” is liquidity preference, or the demand for money. They seem to have a problem with the Fed acting to accommodate the public’s demand for liquidity. The alternative to the Fed’s accommodating a demand for liquidity is to have that demand manifested in deflation. That’s what happened in 1929-33, when the Fed deliberately set out to combat stock-market speculation by raising interest rates until the stock market crashed, and only then reduced rates to 2%, which, in an environment of rapidly falling prices, was still a ferociously tight monetary policy. The .77 R-squared that Steil and Walker find reflects the fact, for which we can all be thankful, that the Fed has at least prevented a deflationary catastrophe from overtaking the US economy.

The fact is that what mainly governs the level of stock prices is expectations about the future performance of the economy. If the Fed takes seriously its dual mandate, then it is necessarily affecting the level of stock prices. That is something very different from the creation of a “Bernanke put” in which the Fed is committed to “ease monetary policy whenever there is a stock market correction.” I don’t know why some people have a problem understanding the difference.  But they do, or at least act as if they do.

Taylor Rules?

John Taylor recently had a post on his blog with the accompanying graph showing the actual Fed Funds rate target of the Fed since 2005 and the Fed Funds rate implied by two versions of the Taylor rule, one that he specifically proposed and another used in a study by Janet Yellen that Taylor, in a 1999 paper, had mentioned as a possible alternative version of his rule. Taylor has subsequently tried to put some distance between himself and the alternative version, the alternative version implying a far lower optimal interest-rate target than the version that he now professes to prefer.  But while not explicitly endorsing it when first mentioning it as an alternative, neither did Taylor express any reservations about the alternative, providing no hint that he considered it to be inconsistent with the spirit of his rule or to be obviously inferior to his own previous version, for which he now insists he has a preference.
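The gap between the two versions is easy to make concrete. The sketch below assumes the usual parameterizations: the original 1993 rule with a 0.5 weight on the output gap, and the alternative version, associated with Yellen’s study, with a 1.0 weight; the function names and the illustrative inflation and gap numbers are mine:

```python
def taylor_1993(inflation, gap):
    # Taylor's original 1993 rule: 0.5 weight on the output gap.
    return 2.0 + inflation + 0.5 * (inflation - 2.0) + 0.5 * gap

def taylor_alt(inflation, gap):
    # Alternative version (weight of 1.0 on the output gap),
    # the variant Taylor mentioned in 1999 and Yellen later used.
    return 2.0 + inflation + 0.5 * (inflation - 2.0) + 1.0 * gap

# With inflation at 1.5% and a -6% output gap (a deep slump),
# the two "rules" disagree by three full percentage points.
print(taylor_1993(1.5, -6.0), taylor_alt(1.5, -6.0))  # 0.25 -2.75
```

The deeper the slump, the wider the disagreement: in exactly the circumstances when guidance matters most, the two versions of the “rule” point in materially different directions.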

What I find especially noteworthy is that both versions of the Taylor rule implied a target interest rate substantially higher than the Fed Funds rate actually in effect for most of 2008. And that is aside from the remarkable fact, noted by Scott Sumner, that Taylor’s preferred rule would have called for a rate increase in early 2008, when the economy was already in recession and on the verge of one of the sharpest one-quarter declines in real GDP on record, in the third quarter of 2008, even before the Lehman panic of September-October. So Taylor is implicitly endorsing a far tighter monetary policy in 2008, after the economy had already entered a recession and started a rapid contraction, than the disastrously tight policy to which the economy was then being subjected by the FOMC.

Now, in fairness to Taylor, he could argue that the difficulties all stemmed from the prolonged period of very low interest rates following the 2001 recession. But that simply underscores the inherent unworkability of a mechanical rule of the type that Taylor is so enamored of. Conditions are rarely ideal, so you can never be sure that the interest rate implied by the Taylor rule (of whichever version) is preferable to the rate chosen at the discretion of the monetary authority. In retrospect, some of the time the FOMC seems to have done better than the Taylor rules, and some of the time one or both of the Taylor rules seem to have done better than the FOMC. Not exactly an overwhelmingly good performance. So why should anyone assume that adopting the Taylor rule would be an improvement, all things considered, over the exercise of discretion?

Taylor wants to argue that the exercise of discretion is bad in and of itself. But which is The Taylor rule? Taylor likes one version of the rule, but he can’t provide any argument that the Taylor rule that he prefers is better than the one that he now says that he doesn’t prefer, though no such preference was expressed when he first mentioned the alternative version. And even now, though he claims to like one version better than the other, he can only conclude his post by saying that more research on the relative merits of the rules is necessary. In other words, adopting the Taylor rule is not sufficient to eliminate policy uncertainty, as the gap in the diagram between the rates implied by the two rules clearly indicates.

The upshot of all this is just that for Taylor to suggest that adopting his rule would somehow reduce policy uncertainty when there is clearly no way to specify the parameters necessary to generate a predictable value for the interest rate target implied by the rule is simply disingenuous. Moreover, to suggest that there is any evidence that following the Taylor rule (whatever such a vague and imprecise concept can possibly mean) would have led to better outcomes than the not very impressive performance of the FOMC is just laughable.

PS This will be my last post until next week after the Jewish New Year. My best wishes go out to all for a happy, healthy, and peaceful New Year.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
