Archive for the 'QE' Category



Mrs. Merkel Lives in a World of Her Own

I woke up today to read the following on the front page of the Financial Times (“Merkel highlights Eurozone divisions with observations on interest rates”).

Angela Merkel underlined the gulf at the heart of the eurozone when she waded into interest-rate policy, arguing that, taken in isolation, Germany would need higher rates, in contrast to southern states that are crying out for looser monetary policy.

The German chancellor’s highly unusual intervention on Thursday, a week before many economists expect the independent European Central Bank to cut its main interest rate, highlights how the economies of the prosperous north and austerity-hit south remain far apart.

What could Mrs. Merkel possibly have meant by this remark? Presumably she means that inflation in Germany is higher than she would like it to be, so that her preference would be that the ECB raise its lending rate, thereby tightening monetary policy for the entire Eurozone in order to bring down the German rate of inflation (which is now less than 2 percent under every measure). The question is why did she bother to say this? My guess is that she is trying to make herself look as if she is being solicitous of the poor unfortunates who constitute the rest of the Eurozone, those now suffering from a widening and deepening recession.

Her message is: “Look, if I had my way, I would raise interest rates, forcing an even deeper recession and even more pain on the rest of you moochers. But, tender-hearted softy that I am, I am not going to do that. I will settle for keeping the ECB lending rate at its current level, or maybe, if you bow and scrape enough, I might, just might, allow the ECB to cut the rate by a quarter of a percent. But don’t think for even a minute that I am going to allow the ECB to follow the Fed and the Bank of Japan in adopting any kind of radical, inflationist quantitative easing.”

So the current German rate of inflation of 1-2% is too high for Mrs. Merkel. The adjustment in relative prices between Germany and the rest of the Eurozone requires that prices and wages in the rest of the Eurozone fall relative to prices and wages in Germany. Mrs. Merkel says that she will not allow inflation in Germany to go above 1-2%. What does that say about what must happen to prices and wages in the rest of the Eurozone? Do the math. So if Mrs. Merkel has her way — and she clearly speaks with what Mark Twain once called “the calm confidence of a Christian holding four aces” – things will continue to get worse, probably a lot worse, in the Eurozone before they get any better. Get used to it.
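To see what “do the math” implies, here is a back-of-the-envelope calculation. The size and speed of the required adjustment are purely illustrative assumptions, not estimates:

```latex
% Suppose (illustratively) the periphery needs a relative-price adjustment of
% R = 15\% against Germany over T = 5 years, while German inflation is capped
% at \pi_G = 1.5\% a year. Peripheral inflation \pi_P must then satisfy roughly
\pi_P \;\approx\; \pi_G - \frac{R}{T} \;=\; 1.5\% - \frac{15\%}{5\ \text{years}} \;=\; -1.5\% \ \text{a year,}
% i.e., outright deflation in the rest of the Eurozone for half a decade.
```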

Too Little, Too Late?

The FOMC, after over four years of overly tight monetary policy, seems to be feeling its way toward an easier policy stance. But will it do any good? Unfortunately, there is reason to doubt that it will. The FOMC statement pledges to continue purchasing $85 billion a month of Treasuries and mortgage-backed securities and to keep interest rates at current low levels until the unemployment rate falls below 6.5% or the inflation rate rises above 2.5%. In other words, the Fed is saying that it will tolerate an inflation rate only marginally higher than the current target for inflation before it begins applying the brakes to the expansion. Here is how the New York Times reported on the Fed announcement.

The Federal Reserve said Wednesday it planned to hold short-term interest rates near zero so long as the unemployment rate remains above 6.5 percent, reinforcing its commitment to improve labor market conditions.

The Fed also said that it would continue in the new year its monthly purchases of $85 billion in Treasury bonds and mortgage-backed securities, the second prong of its effort to accelerate economic growth by reducing borrowing costs.

But Fed officials still do not expect the unemployment rate to fall below the new target for at least three more years, according to forecasts also published Wednesday, and they chose not to expand the Fed’s stimulus campaign.

In fairness to the FOMC, the Fed, although technically independent, must operate within an implicit consensus about the kinds of decisions it can take; absent a clear signal of support from the administration, its freedom to depart substantially from the terms of that consensus is circumscribed. For the Fed to substantially raise its inflation target would risk a political backlash against it, and perhaps precipitate a deep internal split within the Fed’s leadership. At the depth of the financial crisis and in its immediate aftermath, perhaps Chairman Bernanke, if he had been so inclined, might have been able to effect a drastic change in monetary policy, but that window of opportunity closed quickly once the economy stopped contracting and began its painfully slow pseudo-recovery.

As I have observed a number of times (here, here, and here), the paradigm for the kind of aggressive monetary easing that is now necessary is FDR’s unilateral decision to take the US off the gold standard in 1933. But FDR was a newly elected President with a massive electoral mandate, and he was making decisions in the midst of the worst economic crisis in modern times. Could an unelected technocrat (or a collection of unelected technocrats) take such actions on his (or their) own? From the get-go, the Obama administration showed no inclination to provide any significant input to the formulation of monetary policy, either out of an excess of scruples about Fed independence or out of a misguided belief that monetary policy was powerless to affect the economy when interest rates were close to zero.

Stephen Williamson, on his blog, consistently gives articulate expression to the doctrine of Fed powerlessness. In a post yesterday, correctly anticipating that the Fed would continue its program of buying mortgage-backed securities and Treasuries, and would tie its policy to numerical triggers relating to unemployment, Williamson disdainfully voiced his skepticism that the Fed’s actions would have any positive effect on the real performance of the economy, while registering his doubt that the Fed, in attempting to reduce unemployment, would be any more successful in preventing inflation from getting out of hand than it was in the 1970s.

It seems to me that Williamson reaches this conclusion based on the following premises. The Fed has little or no control over interest rates or inflation, and the US economy is not far removed from its equilibrium growth path. But Williamson also believes that the Fed might be able to increase inflation, and that that would be a bad thing if the Fed were actually to do so.  The Fed can’t do any good, but it could do harm.

Williamson is fairly explicit in saying that he doubts the ability of positive QE to stimulate, and of negative QE (which, I guess, might be called QT) to dampen, real or nominal economic activity.

Short of a theory of QE – or more generally a serious theory of the term structure of interest rates – no one has a clue what the effects are, if any. Until someone suggests something better, the best guess is that QE is irrelevant. Any effects you think you are seeing are either coming from somewhere else, or have to do with what QE signals for the future policy rate. The good news is that, if it’s irrelevant, it doesn’t do any harm. But if the FOMC thinks it works when it doesn’t, that could be a problem, in that negative QE does not tighten, just as positive QE does not ease.

But Williamson seems a bit uncertain about the effects of “forward guidance,” i.e., the Fed’s commitment to keep interest rates low for an extended period of time, or until a trigger is pulled, e.g., when unemployment falls below a specified level. This is where Williamson sees a real potential for mischief.

(1) To be well-understood, the triggers need to be specified in a very simple form. As such it seems as likely that the Fed will make a policy error if it commits to a trigger as if it commits to a calendar date. The unemployment rate seems as good a variable as any to capture what is going on in the real economy, but as such it’s pretty bad. It’s hardly a sufficient statistic for everything the Fed should be concerned with.

(2) This is a bad precedent to set, for two reasons. First, the Fed should not be setting numerical targets for anything related to the real side of the dual mandate. As is well-known, the effect of monetary policy on real economic activity is transient, and the transmission process poorly understood. It would be foolish to pretend that we know what the level of aggregate economic activity should be, or that the Fed knows how to get there. Second, once you convince people that triggers are a good idea in this “unusual” circumstance, those same people will wonder what makes other circumstances “normal.” Why not just write down a Taylor rule for the Fed, and send the FOMC home? Again, our knowledge of how the economy works, and what future contingencies await us, is so bad that it seems optimal, at least to me, that the Fed make it up as it goes along.

I agree that a fixed trigger is a very blunt instrument, and it is hard to know what level to set it at. In principle, it would be preferable if the trigger were not pulled automatically, but only as a result of some exercise of discretionary judgment on the part of the monetary authority, except that the exercise of discretion may undermine the expectational effect of setting a trigger. Williamson’s second objection strikes me as less persuasive than the first. It is at least misleading, and perhaps flatly wrong, to say that the effect of monetary policy on real economic activity is transient. The standard argument for the ineffectiveness of monetary policy involves an exercise in which the economy starts off at equilibrium. If you take such an economy and apply a monetary stimulus to it, there is a plausible (but not necessarily unexceptionable) argument that the long-run effect of the stimulus will be nil, and any transitory gain in output and employment may be offset (or outweighed) by a subsequent transitory loss. But if the initial position is out of equilibrium, I am unaware of any plausible, let alone compelling, argument that monetary stimulus would not be effective in hastening the adjustment toward equilibrium. In a trivial sense, the effect of monetary policy is transient inasmuch as the economy would eventually reach an equilibrium even without monetary stimulus. However, unlike the case in which monetary stimulus is applied to an economy in equilibrium, applying monetary policy to an economy out of equilibrium can produce short-run gains that aren’t wiped out by subsequent losses. I am not sure how to interpret the rest of Williamson’s criticism. One might almost interpret him as saying that he would favor a policy of targeting nominal GDP (which bears a certain family resemblance to the Taylor rule), a policy that would also address some of the other concerns Williamson has about the Fed’s choice of triggers, except that Williamson is already on record in opposition to NGDP targeting.

In reply to a comment on this post, Williamson made the following illuminating observation:

Read James Tobin’s paper, “How Dead is Keynes?” referenced in my previous post. He was writing in June 1977. The unemployment rate is 7.2%, the cpi inflation rate is 6.7%, and he’s complaining because he thinks the unemployment rate is disastrously high. He wants more accommodation. Today, I think we understand the reasons that the unemployment rate was high at the time, and we certainly don’t think that monetary policy was too tight in mid-1977, particularly as inflation was about to take off into the double-digit range. Today, I don’t think the labor market conditions we are looking at are the result of sticky price/wage inefficiencies, or any other problem that monetary policy can correct.

The unemployment rate in 1977 was 7.2%, at least half a percentage point lower than the current rate, and the CPI inflation rate was 6.7%, nearly 5 percentage points higher than the current rate. Just because Tobin was overly disposed toward monetary expansion in 1977, when unemployment was lower and inflation higher than they are now, it does not follow that monetary expansion now would be as misguided as it was in 1977. Williamson is convinced that the labor market is now roughly in equilibrium, so that monetary expansion would lead us away from, not toward, equilibrium. Perhaps it would, but most informed observers simply don’t share Williamson’s intuition that the current state of the economy is not that far from equilibrium. Unless you buy that far-from-self-evident premise, the case for monetary expansion is hard to dispute. Nevertheless, despite his current unhappiness, I am not so sure that Williamson will be as upset with the actual policy that the Fed is going to implement as he seems to think he will be. The Fed is moving in the right direction, but is only taking baby steps.

PS I see that Williamson has now posted his reaction to the Fed’s statement.  Evidently, he is not pleased.  Perhaps I will have something more to say about that tomorrow.

Economy, Heal Thyself

Lately, some smart economists (Eli Dourado, backed up by Larry White, George Selgin, and Tyler Cowen) have been questioning whether it is plausible, four years after the US economy was hit with a severe negative shock to aggregate demand, and about three and a half years since aggregate demand stopped falling (nominal GDP subsequently growing at about a 4% annual rate), that the reason for persistent high unemployment and anemic growth in real output is that nominal aggregate demand has been growing too slowly. Even conceding that the 4% growth in nominal GDP was too slow to generate a rapid recovery from the original shock, they still ask why, almost four years after hitting bottom, we should assume that slow growth in real GDP and persistent high unemployment are the result of deficient aggregate demand rather than the result of some underlying real disturbance, such as a massive misallocation of resources and capital induced by the housing bubble from 2002 to 2006. In other words, even if it was an aggregate demand shock that caused a sharp downturn in 2008-09, and even if insufficient aggregate demand growth unnecessarily weakened and prolonged the recovery, what reason is there to assume that the economy could not, by now, have adjusted to a slightly lower rate of growth in nominal GDP of 4% (compared to the 5 to 5.5% that characterized the period preceding the 2008 downturn)? As Eli Dourado puts it:

If we view the recession as a purely nominal shock, then monetary stimulus only does any good during the period in which the economy is adjusting to the shock. At some point during a recession, people’s expectations about nominal flows get updated, and prices, wages, and contracts adjust. After this point, monetary stimulus doesn’t help.

Thus, Dourado, White, Selgin, and Cowen want to know why an economy not afflicted by some deep structural (i.e., real) problems would not have bounced back to its long-term trend of real output and employment after almost four years of steady 4% nominal GDP growth. Four percent growth in nominal GDP may have been too stingy, but why should we believe that 4% nominal GDP growth would not, in the long run, provide enough aggregate demand to allow an eventual return to the economy’s long-run real growth path? And if one concedes that a steady rate of 4% growth in nominal GDP would eventually get the economy back on its long-run real growth path, why should we assume that four years is not enough time to get there?

Well, let me respond to that question with one of my own: what is the theoretical basis for assuming that an economy subjected to a very significant nominal shock that substantially reduces real output and employment would ever recover from that shock and revert back to its previous growth path? There is, I suppose, a presumption that markets equilibrate themselves through price adjustments, prices adjusting in response to excess demands and supplies until markets again clear. But there is a fallacy of composition at work here. Supply and demand curves are always drawn for a single market. The partial-equilibrium analysis that we are taught in econ 101 operates based on the implicit assumption that all markets other than the one under consideration are in equilibrium. (That is actually a logically untenable assumption, because, according to Walras’s Law, if one market is out of equilibrium at least one other market must also be out of equilibrium, but let us not dwell on that technicality.) But after an economy-wide nominal shock, the actual adjustment process involves not one market but many (if not most, or even all) markets that are simultaneously out of equilibrium. When many markets are out of equilibrium, the adjustment process is much more problematic than under the assumptions of the partial-equilibrium analysis that we are so accustomed to. Just because the adjustment process that brings a single isolated market back from disequilibrium to equilibrium seems straightforward, we are not necessarily entitled to assume that there is an equivalent adjustment process from an economy-wide disequilibrium in which many, most, or all markets are starting from a position of disequilibrium. A price adjustment in any one market will, in general, affect demands and supplies in at least some other markets. If only a single market is out of equilibrium, the effects on other markets of price and quantity adjustment in that one market are likely to be small enough that those effects can be safely ignored. But when many, most, or all markets are in disequilibrium, the adjustments in some markets may aggravate the disequilibrium in other markets, setting in motion an endless series of adjustments that may – but may not! — lead the economy back to equilibrium. We just don’t know. And the uncertainty about whether equilibrium will be restored becomes even greater when one of the markets out of equilibrium is the market for labor, a market in which income effects are so strong that they inevitably have major repercussions on all other markets.
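For readers who want that technicality spelled out, Walras’s Law is just the statement that the value of excess demands, summed over all markets, is identically zero:

```latex
% Walras's Law: with n markets, prices p_i, demands D_i, and supplies S_i,
\sum_{i=1}^{n} p_i \left( D_i - S_i \right) \equiv 0
% so a positive excess demand somewhere entails an excess supply somewhere else;
% one market cannot be out of equilibrium all by itself.
```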

Dourado et al. take it for granted that people’s expectations about nominal flows get updated, and that prices, wages, and contracts adjust. But adjustment is one thing; equilibration is another. It is one thing to adjust expectations about a market in disequilibrium when all or most markets are in or near equilibrium; it is another to adjust expectations when markets are all out of equilibrium. Real interest rates, as very imperfectly approximated by TIPS, seem to have been falling steadily since early 2011, reflecting increasing pessimism about future growth in the economy. To overcome the growing entrepreneurial pessimism underlying the fall in real interest rates, it would have been necessary for workers to have accepted wages far below their current levels. That scenario seems wildly unrealistic under any conceivable set of conditions. But even if the massive wage cuts necessary to induce a substantial increase in employment were realistic, wage cuts of that magnitude could have very unpredictable repercussions on consumption spending and prices, potentially setting in motion a destructive deflationary spiral. Dourado assumes that updating expectations about nominal flows, and the adjustment of prices, wages, and contracts, lead to equilibrium – that the short run is short. But that is question-begging no less than the reasoning of those who look at slow growth and high unemployment and conclude that the economy is operating below its capacity. Dourado is sure that the economy has to return to equilibrium in a finite period of time, and I am sure that if the economy were in equilibrium real output would be growing at least 3% a year, and unemployment would be way under 8%. He has no more theoretical ground for his assumption than I do for mine.

Dourado challenges supporters of further QE to make “a broadly falsifiable claim about how long the short run lasts.” My response is that there is no theory available from which to deduce such a falsifiable claim. And as I have pointed out a number of times, no less an authority than F. A. Hayek demonstrated in his 1937 paper “Economics and Knowledge” that there is no economic theory that entitles us to conclude that the conditions required for an intertemporal equilibrium are in fact ever satisfied, or even that there is a causal tendency for them to be satisfied. All we have is some empirical evidence that economies from time to time roughly approximate such states. But that certainly does not entitle us to assume that any lapse from such a state will be spontaneously restored in a finite period of time.

Do we know that QE will work? Do we know that QE will increase real growth and reduce unemployment? No, but we do have a lot of evidence that monetary policy has succeeded in increasing output and employment in the past by changing expectations of the future price-level path. To assume that the current state of the economy is an equilibrium when unemployment is at a historically high level and inflation at a historically low level seems to me just, well, irresponsible.

So Many QE-Bashers, So Little Time

Both the Financial Times and the Wall Street Journal have been full of articles and blog posts warning of the ill-effects of QE3. In my previous post, I discussed the most substantial of the recent anti-QE discussions. I was going to do a survey of some of the others that I have seen, but today all I can manage is a comment on one of them.

In the Wall Street Journal, Benn Steil, director of international economics at the Council on Foreign Relations, winner of the 2010 Hayek Book Award for his book Money, Markets, and Sovereignty (co-authored with Manuel Hinds), and Dinah Walker, an analyst at the CFR, complain that the Fed, which supposedly adhered to the Taylor Rule from 1987 to 1999 during a period of exceptional monetary stability, has abandoned the rule since 2000. This is a familiar argument endlessly repeated by none other than John Taylor himself. But as I recently pointed out, Taylor has, implicitly at least, conceded that the supposedly non-discretionary, uncertainty-minimizing Taylor rule comes in multiple versions, and, notwithstanding Taylor’s current claim that he prefers the version that he originally proposed in 1993, he is unable to provide any compelling reason – other than his own exercise of discretion – why that version is entitled to any greater deference than alternative versions of the rule.
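For reference, the two versions of the rule most often discussed can be written roughly as follows (the 2% figures are the equilibrium real rate and inflation target assumed in Taylor’s original formulation):

```latex
% Taylor (1993): i_t is the fed funds target, \pi_t inflation over the prior four
% quarters, y_t the percentage gap between real GDP and potential output.
i_t = 2 + \pi_t + 0.5\,(\pi_t - 2) + 0.5\,y_t
% The commonly cited alternative version doubles the weight on the output gap:
i_t = 2 + \pi_t + 0.5\,(\pi_t - 2) + 1.0\,y_t
% With a large negative output gap, as in 2008-09, the two versions can prescribe
% very different funds rates, which is why the choice of version matters.
```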

Despite the inability of the Taylor rule to specify a unique value, or even a narrow range of values, of the target for the Fed Funds rate, Steil and Walker, presumably taking Taylor’s preferred version as canonical, make the following assertion about the difference between how the Fed Funds rate was set in the 1987-99 period compared with how it was set in the 2000-08 period.

Between 1987, when Alan Greenspan became Fed chairman, and 1999 a neat approximation of how the Fed responded to market signals was captured by the Taylor Rule. Named for John Taylor, the Stanford economist who introduced the rule in 1993, it stipulated that the fed-funds rate, which banks use to set interest rates, should be nudged up or down proportionally to changes in inflation and economic output. By our calculations, the Taylor Rule explained 69% of the variation in the fed-funds rate over that period. (In the language of statistics, the relationship between the rule and the rate had an R-squared of .69.)

Then came a dramatic change. Between 2000 and 2008, when the Fed cut the fed-funds target rate to near zero, the R-squared collapsed to .35. The Taylor Rule was clearly no longer guiding U.S. monetary policy.

This is a pretty extravagant claim. The 1987-99 period was marked by a single recession, a recession triggered largely by a tightening of monetary policy when inflation was rising above the 3.5 to 4 percent range that was considered acceptable after the Volcker disinflation in the early 1980s. So the 1990-91 recession was triggered by the application of the Taylor rule, and the recession triggered a response that was consistent with the Taylor rule. The 2000-08 period was marked by two recessions, both of which were triggered by financial stresses, not rising inflation. To say that the Fed abandoned a rule that it was following in the earlier period is simply to say that circumstances that the Fed did not have to face in the 1987-99 period confronted the Fed in the 2000-08 period. The difference in the R-squared found by Steil and Walker may indicate no more than that the economic environment was more variable in the latter period than in the former.

As I pointed out in my recent post (hyper-linked above) on the multiple Taylor rules, following the Taylor rule in 2008 would have meant targeting the Fed Funds rate for most of 2008 at an even higher level than the disastrously high rate that the Fed was targeting in 2008 while the economy was already in recession and entering, even before the Lehman debacle, one of the sharpest contractions since World War II. Indeed, Taylor’s preferred version implied that the Fed should have increased (!) the Fed Funds rate in the spring of 2008.

Steil and Walker attribute the Fed’s deviation from the Taylor rule to an implicit strategy of targeting asset prices.

In a now-famous speech invoking the analogy of a “helicopter drop of money,” [Bernanke] argued that monetary interventions that boosted asset values could help combat deflation risk by lowering the cost of capital and improving the balance sheets of potential borrowers.

Mr. Bernanke has since repeatedly highlighted asset-price movements as a measure of policy success. In 2003 he argued that “unanticipated changes in monetary policy affect stock prices . . . by affecting the perceived riskiness of stocks,” suggesting an explicit reason for using monetary policy to affect the public’s appetite for stocks. And this past February he noted that “equity prices [had] risen significantly” since the Fed began reinvesting maturing securities.

This is a tendentious misreading of Bernanke’s statements. He is not targeting stock prices, but he is arguing that movements in stock prices are correlated with expectations about the future performance of the economy, so that rising stock prices in response to a policy decision of the Fed provide some evidence that the policy has improved economic conditions. Why should that be controversial?

Steil and Walker then offer a strange statistical “test” of their theory that the Fed is targeting stock prices.

Between 2000 and 2008, the level of household risk aversion—which we define as the ratio of household currency holdings, bank deposits and money-market funds to total household financial assets—explained a remarkable 77% of the variation in the fed-funds rate (an R-squared of .77). In other words, the Fed was behaving as if it were targeting “risk on, risk off,” moving interest rates to push investors toward or away from risky assets.

What Steil and Walker are measuring by their “ratio of household risk aversion” is liquidity preference or the demand for money. They seem to have a problem with the Fed acting to accommodate the public’s demand for liquidity. The alternative to the Fed’s accommodating a demand for liquidity is to have that demand manifested in deflation. That’s what happened in 1929-33, when the Fed deliberately set out to combat stock-market speculation by raising interest rates until the stock market crashed, and only then reduced rates to 2%, which, in an environment of rapidly falling prices, was still a ferociously tight monetary policy. The .77 R-squared that Steil and Walker find reflects the fact, for which we can all be thankful, that the Fed has at least prevented a deflationary catastrophe from overtaking the US economy.
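For what it’s worth, the mechanics of their “test” amount to nothing more than a single-regressor least-squares fit. Here is a minimal sketch of that kind of calculation, using made-up illustrative series (not their data), with the regression form inferred from their description:

```python
# Minimal sketch of a Steil-Walker style regression (illustrative data only).
# Their "household risk aversion" is the ratio of currency + deposits + money-market
# funds to total household financial assets; the R-squared comes from regressing
# the fed funds rate on that single ratio.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quarterly series for 2000-2008 (36 quarters), purely for illustration.
liquid_assets = rng.uniform(5.0, 8.0, 36)     # currency + deposits + MMFs ($tn)
total_assets = rng.uniform(35.0, 50.0, 36)    # total household financial assets ($tn)
risk_aversion = liquid_assets / total_assets  # the "household risk aversion" ratio

# Fabricate a fed funds series loosely (and negatively) related to the ratio, plus noise.
fed_funds = 8.0 - 30.0 * risk_aversion + rng.normal(0.0, 0.5, 36)

# Ordinary least squares of the fed funds rate on the ratio, and the resulting R-squared.
X = np.column_stack([np.ones_like(risk_aversion), risk_aversion])
beta, *_ = np.linalg.lstsq(X, fed_funds, rcond=None)
fitted = X @ beta
ss_res = np.sum((fed_funds - fitted) ** 2)
ss_tot = np.sum((fed_funds - fed_funds.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope = {beta[1]:.2f}, intercept = {beta[0]:.2f}, R^2 = {r_squared:.2f}")
```

A high R-squared in such a regression tells you only that the funds rate moved with the public’s demand for liquid assets; it does not tell you that the Fed was targeting stock prices.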

The fact is that what mainly governs the level of stock prices is expectations about the future performance of the economy. If the Fed takes seriously its dual mandate, then it is necessarily affecting the level of stock prices. That is something very different from the creation of a “Bernanke put” in which the Fed is committed to “ease monetary policy whenever there is a stock market correction.” I don’t know why some people have a problem understanding the difference.  But they do, or at least act as if they do.

Bullard Defends the Indefensible

James Bullard, the President of the St. Louis Federal Reserve Bank, is a very fine economist, having worked his way up the ranks after joining the St. Louis Fed’s research department in 1990 as a newly minted Ph.D. from Indiana University, publishing his research widely in leading journals (and also contributing an entry on “learning” to Business Cycles and Depressions: An Encyclopedia, which I edited). Bullard may just be the most centrist member of the FOMC (see here), and his pronouncements on monetary policy are usually measured and understated, eschewing the outspoken style of some of his colleagues (especially the three leading inflation hawks on the FOMC, Charles Plosser, Jeffrey Lacker, and Richard Fisher).

But even though Bullard is a very sensible and knowledgeable guy, whose views I take seriously, I am having a lot of trouble figuring out what he was up to in the op-ed piece he published in today’s Financial Times (“Patience needed for Fed’s dual mandate”), in which he argued that the fact that the Fed has persistently undershot its inflation target, while unemployment has been way above any reasonable estimate of the rate consistent with full employment, is no reason for the Fed to change its policy toward greater ease. In other words, Bullard sees no reason why the Fed should now seek, or at least tolerate, an inflation rate that temporarily meets or exceeds the Fed’s current 2% target. In a recent interview, Bullard stated that he would not have supported the decision to embark on QE3.

To support his position, Bullard cites a 2007 paper in the American Economic Review by Smets and Wouters “Shocks and Frictions in US Business Cycles.” The paper estimates a DSGE model of the US economy and uses it to generate out-of-sample predictions that are comparable to those of a Bayesian vector autoregression model. Here’s how Bullard characterizes the rationale for QE3 and explains how that rationale is undercut by the results of the Smets and Wouters paper.

The Fed has a directive that calls for it to maintain stable prices as well as maximum employment, along with moderate long-term interest rates. Since unemployment is high by historical standards (8.1 per cent), observers argue the Fed must not be “maximising employment”. Inflation, as measured by the personal consumption expenditures deflator price index, has increased to about 1.3 per cent in the year to July. The Fed’s target is 2 per cent, so critics can say the Fed has not met this part of the mandate. When unemployment is above the natural rate, they say, inflation should be above the inflation target, not below.

I disagree. So does the economic literature. Here is my account of where we are: the US economy was hit by a large shock in 2008 and 2009. This lowered output and employment far below historical trend levels while reducing inflation substantially below 2 per cent. The question is: how do we expect these variables to return to their long-run or targeted values under monetary policy? That is, should the adjustment path be relatively smooth, or should we expect some overshooting?

Evidence, for example a 2007 paper by Frank Smets and Raf Wouters, suggests that it is reasonable to believe that output, employment and inflation will return to their long-run or targeted values slowly and steadily. In the jargon, we refer to this type of convergence as “monotonic”: a shock knocks the variables off their long-run values but they gradually return, without overshooting on the other side. Wild dynamics would be disconcerting.
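In symbols, the kind of “monotonic” convergence Bullard has in mind is an adjustment path of roughly the following form (a stylized illustration, not the Smets-Wouters dynamics):

```latex
% Let x_t be output, employment, or inflation and x^* its long-run or targeted value.
x_t - x^* = \rho^{\,t}\,(x_0 - x^*), \qquad 0 < \rho < 1
% The gap shrinks every period and never changes sign: no overshooting. Overshooting
% would require, e.g., a negative or complex root, so that x_t crosses x^* on the way back.
```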

What is wrong with Bullard’s argument? Well, just because Smets and Wouters estimated a DSGE model in 2007 that they were able to use to generate “good” out-of-sample predictions does not prove that the model would generate good out-of-sample predictions for 2008-2012. Maybe it does; I don’t know. But Bullard is a very smart economist, and he has a bunch of very smart economists working for him. Have they used the Smets and Wouters DSGE model to generate out-of-sample predictions for 2008 to 2012? I don’t know. But if they have, why doesn’t Bullard even mention what they found?

Bullard says that the Smets and Wouters paper “suggests that it is reasonable to believe that output, employment and inflation will return to their long-run or targeted values slowly and steadily.” Even if we stipulate to that representation of what the paper shows, that is setting the bar very low. Bullard’s representation calls to mind a famous, but often misunderstood, quote by a dead economist.

The long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is past the ocean is flat again.

Based on a sample that included no shock to output, employment, and inflation of comparable magnitude to the shock experienced in 2008-09, Bullard is prepared to opine confidently that we are now on a glide path headed toward the economy’s potential output, toward full employment, and toward 2% inflation. All we need is patience. But Bullard provides no evidence, not even a simulation based on the model that he says he is relying on, that would tell us how long it will take to reach the end state whose realization he so confidently promises. Nor does he provide any evidence, not even a simulation based on the Smets-Wouters model — a model that, as far as I know, has not yet achieved anything like canonical status — estimating what the consequences of increasing the Fed’s inflation target would be, much less the consequences of changing the policy rule underlying the Smets-Wouters model from inflation targeting to something like a price-level target or an NGDP target. And since the Lucas Critique tells us that a simulation based on a sample in which one policy rule was being implemented cannot be relied upon to predict the consequences of adopting a different policy rule from that used in the original estimation, I have no idea how Bullard can be so confident about what the Smets and Wouters paper can teach us about adopting QE3.

PS  In the comment below Matt Rognlie, with admirable clearness and economy, fleshes out my intuition that the Smets-Wouters paper provides very little empirical support for the proposition that Bullard is arguing for in his FT piece.  Many thanks and kudos to Matt for his contribution.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
