Archive for the 'Lucas Critique' Category

Richard Lipsey and the Phillips Curve Redux

Almost three and a half years ago, I published a post about Richard Lipsey’s paper “The Phillips Curve and the Tyranny of an Assumed Unique Macro Equilibrium.” The paper, originally presented at the 2013 meeting of the History of Economics Society, has just been published in the Journal of the History of Economic Thought with a slightly revised title, “The Phillips Curve and an Assumed Unique Macroeconomic Equilibrium in Historical Context.” The abstract of the revised published version of the paper is different from the earlier abstract included in my 2013 post. Here is the new abstract.

An early post-WWII debate concerned the most desirable demand and inflationary pressures at which to run the economy. Context was provided by Keynesian theory devoid of a full employment equilibrium and containing its mainly forgotten, but still relevant, microeconomic underpinnings. A major input came with the estimates provided by the original Phillips curve. The debate seemed to be rendered obsolete by the curve’s expectations-augmented version with its natural rate of unemployment, and associated unique equilibrium GDP, as the only values consistent with stable inflation. The current behavior of economies with successful inflation targeting is inconsistent with this natural-rate view, but is consistent with evolutionary theory in which economies have a wide range of GDP compatible with stable inflation. Now the early post-WWII debates are seen not to be as misguided as they appeared to be when economists came to accept the assumptions implicit in the expectations-augmented Phillips curve.

Publication of Lipsey’s article nicely coincides with Roger Farmer’s new book Prosperity for All, which I discussed in my previous post. A key point that Roger makes is that the assumption of a unique equilibrium, which underlies both modern macroeconomics and the vertical long-run Phillips Curve, is neither theoretically compelling nor consistent with the empirical evidence. Lipsey’s article powerfully reinforces those arguments. Access to Lipsey’s article is gated on the JHET website, so in addition to the abstract, I will quote the introduction and a couple of paragraphs from the conclusion.

One important early post-WWII debate, which took place particularly in the UK, concerned the demand and inflationary pressures at which it was best to run the economy. The context for this debate was provided by early Keynesian theory with its absence of a unique full-employment equilibrium and its mainly forgotten, but still relevant, microeconomic underpinnings. The original Phillips Curve was highly relevant to this debate. All this changed, however, with the introduction of the expectations-augmented version of the curve with its natural rate of unemployment, and associated unique equilibrium GDP, as the only values consistent with a stable inflation rate. This new view of the economy found easy acceptance partly because most economists seem to feel deeply in their guts — and their training predisposes them to do so — that the economy must have a unique equilibrium to which market forces inevitably propel it, even if the approach is sometimes, as some believe, painfully slow.

The current behavior of economies with successful inflation targeting is inconsistent with the existence of a unique non-accelerating-inflation rate of unemployment (NAIRU) but is consistent with evolutionary theory in which the economy is constantly evolving in the face of path-dependent, endogenously generated, technological change, and has a wide range of unemployment and GDP over which the inflation rate is stable. This view explains what otherwise seems mysterious in the recent experience of many economies and makes the early post-WWII debates not seem as silly as they appeared to be when economists came to accept the assumption of a perfectly inelastic, long-run Phillips curve located at the unique equilibrium level of unemployment. One thing that stands in the way of accepting this view, however, is the tyranny of the generally accepted assumption of a unique, self-sustaining macroeconomic equilibrium.

This paper covers some of the key events in the theory concerning, and the experience of, the economy’s behavior with respect to inflation and unemployment over the post-WWII period. The stage is set by the pressure-of-demand debate in the 1950s and the place that the simple Phillips curve came to play in it. The action begins with the introduction of the expectations-augmented Phillips curve and the acceptance by most Keynesians of its implication of a unique, self-sustaining macro equilibrium. This view seemed not inconsistent with the facts of inflation and unemployment until the mid-1990s, when the successful adoption of inflation targeting made it inconsistent with the facts. An alternative view is proposed, one that is capable of explaining current macro behavior and reinstates the relevance of the early pressure-of-demand debate. (pp. 415-16).

In reviewing the evidence that stable inflation is consistent with a range of unemployment rates, Lipsey generalizes the concept of a unique NAIRU to a non-accelerating-inflation band of unemployment (NAIBU), within which multiple rates of unemployment are consistent with a basically stable expected rate of inflation. In an interesting footnote, Lipsey addresses a possible Lucas-critique argument against the policy relevance of the empirical evidence.

Some might raise the Lucas critique here, arguing that one finds the NAIBU in the data because policymakers are credibly concerned only with inflation. As soon as policymakers made use of the NAIBU, the whole unemployment-inflation relation that has been seen since the mid-1990s might change or break. For example, unions, particularly in the European Union, where they are typically more powerful than in North America, might alter their behavior once they became aware that the central bank was actually targeting employment levels directly and appeared to have the power to do so. If so, the Bank would have to establish that its priorities were lexicographically ordered with control of inflation paramount so that any level-of-activity target would be quickly dropped whenever inflation threatened to go outside of the target bands. (pp. 426-27)
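To put the NAIRU-NAIBU contrast in schematic terms (my notation, not Lipsey’s): in the standard expectations-augmented Phillips curve,

\pi_t = \pi_t^e - \beta\,(u_t - u^*), \qquad \beta > 0,

inflation is stable (\pi_t = \pi_t^e) only at the single unemployment rate u_t = u^*, the NAIRU. On the NAIBU view there is instead a band u_L \le u_t \le u_H within which inflation neither accelerates nor decelerates, so that the economy can settle at any unemployment rate inside the band without disturbing a stable expected rate of inflation.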

I would just mention, in connection with this possible objection, that in this 2013 post about the Lucas critique I pointed out that in the paper in which Lucas articulated his critique, he assumed that the only possible source of disequilibrium was a mistake in expected inflation. If everything else is working well, then incorrect inflation expectations will make things worse. But if there are other sources of disequilibrium, it is not clear that incorrect inflation expectations make things worse; they could even make things better. That is a point that Lipsey and Kelvin Lancaster taught the profession in their classic article “The General Theory of Second Best,” 20 years before Lucas published his critique of econometric policy evaluation.

I conclude by quoting Lipsey’s penultimate paragraph (the final paragraph being a quote from Lipsey’s paper on the Phillips Curve in the Blaug and Lloyd volume Famous Figures and Diagrams in Economics, which I quoted in full in my 2013 post).

So we seem to have gone full circle from the early Keynesian view in which there was no unique level of GDP to which the economy was inevitably drawn, through a simple Phillips curve with its implied trade-off, to an expectations-augmented Phillips curve (or any of its more modern equivalents) with its associated unique level of GDP, and finally back to the early Keynesian view in which policymakers had an option as to the average pressure of aggregate demand at which economic activity could be sustained. However, the modern debate about whether to aim for [the high or low range of stable unemployment rates] is not a debate about inflation versus growth, as it was in the 1950s, but between those who would risk an occasional rise of inflation above the target band as the price of getting unemployment as low as possible and those who would risk letting unemployment fall below that indicated by the lower boundary of the NAIBU as the price of never risking an acceleration of inflation above the target rate. (p. 427)

Paul Romer on Modern Macroeconomics, Or, the “All Models Are False” Dodge

Paul Romer has been engaged for some time in a worthy campaign against the travesty of modern macroeconomics. A little over a year ago, I commented favorably on Romer’s takedown of Robert Lucas, but I also defended George Stigler against what I thought was an unfair attempt by Romer to identify Stigler as an inspiration and role model for Lucas’s transgressions. Now, just a week ago, a paper based on Romer’s Commons Memorial Lecture to the Omicron Delta Epsilon Society has become just about the hottest item in the econ-blogosphere, even drawing the attention of Daniel Drezner in the Washington Post.

I have already written critically about modern macroeconomics in my five years of blogging, and here are some links to previous posts (link, link, link, link). It’s good to see that Romer is continuing to voice his criticisms, and that they are gaining a lot of attention. But the macroeconomic hierarchy is used to criticism and has its standard responses ready, which are being dutifully deployed by defenders of the powers that be.

Romer’s most effective rhetorical strategy is to point out that the RBC core of modern DSGE models posits unobservable taste and technology shocks to account for fluctuations in the economic time series, but that these taste and technology shocks are themselves simply inferred from the fluctuations in the time-series data, so that the entire structure of modern macroeconometrics is little more than an elaborate and sophisticated exercise in question-begging.
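The circularity is easiest to see in the simplest RBC setup, where the “technology shock” is typically measured as the Solow residual – whatever part of measured output growth the measured inputs cannot account for:

\Delta \ln A_t = \Delta \ln Y_t - \alpha\,\Delta \ln K_t - (1 - \alpha)\,\Delta \ln L_t,

with \alpha the capital share. The series of shocks that is supposed to explain the fluctuations is itself constructed from the very fluctuations it is invoked to explain.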

In this post, I just want to highlight one of the favorite catch-phrases of modern macroeconomics, which serves as a kind of default excuse and self-justification for its rampant empirical failures (documented by Lipsey and Carlaw, as I showed in this post). When confronted with evidence that the predictions of their models are wrong, the standard and almost comically self-confident response of modern macroeconomists is: All models are false. By which they apparently mean something like: “If all models are false anyway, you can’t hold us accountable, because any model can be proven wrong. What really matters is that our models, being micro-founded, are not subject to the Lucas Critique, while all models other than ours, not being micro-founded and therefore being subject to the Lucas Critique, are simply unworthy of consideration.” This is what I have called methodological arrogance. And that response is simply not true, because the Lucas Critique applies even to micro-founded models: such models are strictly valid only in equilibrium settings and are unable to predict the adjustment of economies in the transition between equilibrium states. All models are subject to the Lucas Critique.

Here is Romer’s take:

In response to the observation that the shocks are imaginary, a standard defense invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions (p.14).” More recently, “all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favorite.

Friedman’s methodological assertion would have been correct had Friedman substituted “simple” for “unrealistic.” Sometimes simplifications are unrealistic, but they don’t have to be. A simplification is a generalization of something complicated. By simplifying, we can transform a problem that had been too complex to handle into a problem more easily analyzed. But such simplifications aren’t necessarily unrealistic. To say that all models are false is simply a dodge to avoid having to account for failure. The excuse of course is that all those other models are subject to the Lucas Critique, so my model wins. But your model is subject to the Lucas Critique even though you claim it’s not, so even according to the rules you have arbitrarily laid down, you don’t win.

So I was just curious about where the little phrase “all models are false” came from. I was expecting that Karl Popper might have said it, in which case using the phrase as a defense mechanism against empirical refutation would have been a particularly fraudulent tactic, because it would have been a perversion of Popper’s methodological stance, which was to force our theoretical constructs to face up to, not to insulate them from, empirical testing. But when I googled “all theories are false,” what I found was not Popper, but the British statistician G. E. P. Box, who wrote, in his paper “Science and Statistics,” based on his R. A. Fisher Memorial Lecture to the American Statistical Association: “All models are wrong.” Here’s the exact quote:

Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.

Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad. Pure mathematics is concerned with propositions like “given that A is true, does B necessarily follow?” Since the statement is a conditional one, it has nothing whatsoever to do with the truth of A nor of the consequences B in relation to real life. The pure mathematician, acting in that capacity, need not, and perhaps should not, have any contact with practical matters at all.

In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless. The physicist knows that particles have mass and yet certain results, approximating what really happens, may be derived from the assumption that they do not. Equally, the statistician knows, for example, that in nature there never was a normal distribution, there never was a straight line, yet with normal and linear assumptions, known to be false, he can often derive results which match, to a useful approximation, those found in the real world. It follows that, although rigorous derivation of logical consequences is of great importance to statistics, such derivations are necessarily encapsulated in the knowledge that premise, and hence consequence, do not describe natural truth.

It follows that we cannot know that any statistical technique we develop is useful unless we use it. Major advances in science and in the science of statistics in particular, usually occur, therefore, as the result of the theory-practice iteration.

One of the most annoying conceits of modern macroeconomists is their constant self-congratulatory reference to themselves as scientists because of their ostentatious use of axiomatic reasoning, formal proofs, and higher mathematical techniques. The tiresome self-congratulation might get toned down ever so slightly if they bothered to read, and take to heart, Box’s lecture.

What’s Wrong with Monetarism?

UPDATE (05/06): In an email, Richard Lipsey has chided me for seeming to endorse the notion that 1970s stagflation refuted Keynesian economics. Lipsey rightly points out that, by introducing inflation expectations into the Phillips Curve or the Aggregate Supply Curve, a standard Keynesian model is perfectly capable of explaining stagflation, so it is simply wrong to suggest that 1970s stagflation constituted an empirical refutation of Keynesian theory. So my statement in the penultimate paragraph that the k-percent rule

was empirically demolished in the 1980s in a failure even more embarrassing than the stagflation failure of Keynesian economics.

should be amended to read “the supposed stagflation failure of Keynesian economics.”

Brad DeLong recently did a post (“The Disappearance of Monetarism”) referencing an old (apparently unpublished) paper of his that followed up his 2000 article (“The Triumph of Monetarism”) in the Journal of Economic Perspectives. Paul Krugman added his own gloss on DeLong on Friedman in a post called “Why Monetarism Failed.” In the JEP paper, DeLong argued that the New Keynesian policy consensus of the 1990s was built on the foundation of what DeLong called “classic monetarism,” the analytical core of the doctrine developed by Friedman in the 1950s and 1960s, a core that survived the demise of what he called “political monetarism,” the set of factual assumptions and policy preferences required to justify Friedman’s k-percent rule as the holy grail of monetary policy.

In his follow-up paper, DeLong balanced his enthusiasm for Friedman with a bow toward Keynes, noting the influence of Keynes on both classic and political monetarism and arguing that, unlike earlier adherents of the quantity theory, Friedman believed that a passive monetary policy was not the appropriate policy stance during the Great Depression; Friedman famously held the Fed responsible for the depth and duration of what he called the Great Contraction, because it had allowed the US money supply to drop by a third between 1929 and 1933. This was in sharp contrast to hard-core laissez-faire opponents of Fed policy, who regarded even the mild and largely ineffectual steps taken by the Fed – increasing the monetary base by 15% – as illegitimate interventionism, obstructing the salutary liquidation of bad investments and thereby postponing the necessary reallocation of real resources to more valuable uses. So, according to DeLong, Friedman, no less than Keynes, was battling against the hard-core laissez-faire opponents of any positive action to speed recovery from the Depression. While Keynes believed that in a deep depression only fiscal policy would be effective, Friedman believed that, even in a deep depression, monetary policy would be effective. But both agreed that there was no structural reason why stimulus would necessarily be counterproductive; both rejected the idea that recovery would be sustainable only if the increased output generated during the recovery was of a particular composition.

Indeed, that’s why Friedman has always been regarded with suspicion by laissez-faire dogmatists, who correctly judged him to be soft in his criticism of Keynesian doctrines, never having disputed the possibility that “artificially” increasing demand – either by government spending or by money creation – in a deep depression could lead to sustainable economic growth. From the point of view of laissez-faire dogmatists, that concession to Keynesianism constituted a total sellout of fundamental free-market principles.

Friedman parried such attacks on the purity of his free-market credentials with a counterattack against his free-market dogmatist opponents, arguing that the gold standard to which they were so fervently attached was itself inconsistent with free-market principles, because, in virtually all historical instances of the gold standard, the monetary authorities charged with overseeing or administering it retained discretionary authority allowing them to set interest rates and exercise control over the quantity of money. Because the monetary authorities retained substantial discretionary latitude under the gold standard, Friedman argued, a gold standard was institutionally inadequate and incapable of constraining the behavior of the monetary authorities responsible for its operation.

The point of a gold standard, in Friedman’s view, was that it made it costly to increase the quantity of money. That might once have been true, but advances in banking technology eventually made it easy for banks to increase the quantity of money without any increase in the quantity of gold, making inflation possible even under a gold standard. True, eventually the inflation would have to be reversed to maintain the gold standard, but that simply made alternating periods of boom and bust inevitable. Thus, the gold standard, i.e., a mere obligation to convert banknotes or deposits into gold, was an inadequate constraint on the quantity of money, and an inadequate systemic assurance of stability.

In other words, if the point of a gold standard is to prevent the quantity of money from growing excessively, why not just eliminate the middleman and simply establish a monetary rule constraining the growth in the quantity of money? That was why Friedman believed that his k-percent rule – please pardon the expression – trumped the gold standard, accomplishing directly what the gold standard could not accomplish even indirectly: a gradual, steady increase in the quantity of money that would prevent monetary-induced booms and busts.
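In symbols, the rule could hardly be simpler: pick a monetary aggregate M and require

\frac{M_{t+1} - M_t}{M_t} = k, \qquad \text{so that } M_t = M_0\,(1 + k)^t,

with k a constant – Friedman typically suggested something on the order of 3 to 5 percent a year – regardless of the current state of the economy.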

Moreover, the k-percent rule made the monetary authority responsible for one thing, and one thing alone: following a rule prescribing the time path of a single targeted instrument over which the monetary authority supposedly has direct control – the quantity of money. The belief that the monetary authority in a modern banking system has direct control over the quantity of money was, of course, an obvious mistake. That the mistake could have persisted as long as it did was the result of the analytical distraction of the money multiplier: one of the leading fallacies of twentieth-century monetary thought, a fallacy that introductory textbooks unfortunately continue even now to foist upon unsuspecting students.

The money multiplier is not a structural supply-side variable; it is a reduced-form variable incorporating both supply-side and demand-side parameters. But Friedman and other Monetarists insisted on treating it as if it were a structural – indeed a deep structural – supply variable, so that it is no less vulnerable to the Lucas Critique than, say, the Phillips Curve. Nevertheless, for at least a decade and a half after his refutation of the structural Phillips Curve, demonstrating its dangers as a guide to policy making, Friedman continued treating the money multiplier as if it were a deep structural variable, leading to the Monetarist forecasting debacle of the 1980s, when Friedman and his acolytes were confidently predicting – over and over again – the return of double-digit inflation, because the quantity of money was increasing at double-digit rates for most of the 1980s.
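The standard textbook expression makes the point plain:

M = \frac{1 + c}{c + r}\, B,

where M is the money stock, B the monetary base, c the public’s desired currency-deposit ratio, and r the banks’ desired reserve-deposit ratio. The central bank may control B, but c and r are behavioral choices of the public and the banks, and there is no reason to expect either ratio to stay put when the policy regime changes – which is precisely why treating the multiplier (1 + c)/(c + r) as a structural constant invites the Lucas Critique.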

So once the k-percent rule – the Monetarist alternative that Friedman had persuasively, though fallaciously, argued was preferable to the gold standard on strictly libertarian grounds – collapsed under an avalanche of contradictory evidence, the gold standard once again became the default position of laissez-faire dogmatists. There was, to be sure, some consideration given to free banking as an alternative to the gold standard. In his old age, after winning the Nobel Prize, F. A. Hayek introduced a proposal for direct currency competition – the elimination of legal-tender laws and the like – which he later developed into a proposal for the denationalization of money. Hayek’s proposals suggested that convertibility into a real commodity was not necessary for a non-legal-tender currency to have value – a proposition which I have argued is fallacious. So Hayek can be regarded as the grandfather of cryptocurrencies like bitcoin. On the other hand, advocates of free banking, with a few exceptions like Earl Thompson and me, have generally gravitated back to the gold standard.

So while I agree with DeLong and Krugman (and, for that matter, with Friedman’s many laissez-faire dogmatist critics) that Friedman had Keynesian inclinations which, depending on his audience, he sometimes emphasized and sometimes suppressed, the most important reason that he was unable to retain his hold on right-wing monetary-economics thinking is that his key monetary-policy proposal – the k-percent rule – was empirically demolished in a failure even more embarrassing than the stagflation failure of Keynesian economics. With the k-percent rule no longer available as an alternative, what’s a right-wing ideologue to do?

Anyone for nominal gross domestic product level targeting (or NGDPLT for short)?

All New Classical Models Are Subject to the Lucas Critique

Almost 40 years ago, Robert Lucas made a huge, but not quite original, contribution when he provided a very compelling example of how the predictions of the then-standard macroeconometric models used for policy analysis were inherently vulnerable to shifts in the empirically estimated parameters contained in the models – shifts induced by the very policy change under consideration. Insofar as those models could provide reliable forecasts of the future course of the economy, it was because the policy environment under which the parameters of the model had been estimated was not changing during the time period for which the forecasts were made. But any forecast deduced from the model conditioned on a policy change would necessarily be inaccurate, because the policy change itself would cause the agents in the model to alter their expectations in light of the policy change, causing the parameters of the model to diverge from their previously estimated values. Lucas concluded that only models based on deep parameters reflecting the underlying tastes, technology, and resource constraints under which agents make decisions could provide a reliable basis for policy analysis.
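The canonical Phillips-curve example shows the mechanism schematically (the notation is mine). Suppose the estimated relationship is

u_t = u^* - b\,(\pi_t - \pi_t^e),

estimated over a period in which policy was passive enough that expected inflation \pi_t^e stayed roughly constant. The data then trace out what looks like a stable trade-off, u_t = a - b\,\pi_t with a = u^* + b\,\pi^e. But a policy designed to exploit that trade-off changes \pi_t^e, so the estimated intercept shifts, and any forecast conditioned on the new policy but based on the old estimate of a is systematically wrong.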

The Lucas critique undoubtedly conveyed an important insight about how to use econometric models in analyzing the effects of policy changes, and if it had done no more than cause economists to be more cautious in offering policy advice based on their econometric models, and policy makers to be more skeptical about the advice they got from economists using such models, the Lucas critique would have performed a very valuable public service. Unfortunately, the lesson that the economics profession learned from the Lucas critique went far beyond that useful warning about the reliability of conditional forecasts potentially sensitive to unstable parameter estimates. In an earlier post, I discussed another way in which the Lucas Critique has been misapplied. (One responsible way to deal with unstable parameter estimates would be to make forecasts showing a range of plausible outcomes, depending on how parameter estimates might change as a result of the policy change. Such an approach is inherently messy and, at least in the short run, would tend to make policy makers less likely to pay attention to the policy advice of economists. But the inherent sensitivity of forecasts to unstable model parameters ought to make one skeptical about the predictions derived from any econometric model.)

Instead, the Lucas critique was used by Lucas and his followers as a tool by which to advance a reductionist agenda of transforming macroeconomics into a narrow slice of microeconomics, the slice being applied general-equilibrium theory in which the models required drastic simplification before they could generate quantitative predictions. The key to deriving quantitative results from these models is to find an optimal intertemporal allocation of resources given the specified tastes, technology and resource constraints, which is typically done by describing the model in terms of an optimizing representative agent with a utility function, a production function, and a resource endowment. A kind of hand-waving is performed via the rational-expectations assumption, thereby allowing the optimal intertemporal allocation of the representative agent to be identified as a composite of the mutually compatible optimal plans of a set of decentralized agents, the hand-waving being motivated by the Arrow-Debreu welfare theorems proving that any Pareto-optimal allocation can be sustained by a corresponding equilibrium price vector. Under rational expectations, agents correctly anticipate future equilibrium prices, so that market-clearing prices in the current period are consistent with full intertemporal equilibrium.
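In its barest schematic form (the notation is mine, and the details differ across models), the exercise amounts to solving a single planner’s problem of the type

\max_{\{c_t,\, n_t,\, k_{t+1}\}} \; E_0 \sum_{t=0}^{\infty} \beta^t\, u(c_t, n_t) \quad \text{subject to} \quad c_t + k_{t+1} = A_t f(k_t, n_t) + (1 - \delta) k_t,

with the specified tastes u(\cdot), technology f(\cdot) and A_t, and initial endowments doing all the work, and then invoking the welfare theorems and rational expectations to read the solution off as the equilibrium of a decentralized economy.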

What is amazing – mind-boggling might be a more apt adjective – is that this modeling strategy is held by Lucas and his followers to be invulnerable to the Lucas critique, being based supposedly on deep parameters reflecting nothing other than tastes, technology, and resource endowments. The first point to make – there are many others, but we needn’t exhaust the list – is that it is borderline pathological to convert a valid and important warning about how economic models may be misunderstood or misused into a weapon with which to demolish any model susceptible to such misunderstanding or misuse, as a prelude to replacing those models with the class of reductionist micromodels that now pass for macroeconomics.

But there is a second point to make, which is that the reductionist models adopted by Lucas and his followers are no less vulnerable to the Lucas critique than the models they replaced. All the New Classical models are explicitly conditioned on the assumption of optimality. It is only by positing an optimal solution for the representative agent that the equilibrium price vector can be inferred. The deep parameters of the model are conditioned on the assumption of optimality and the existence of an equilibrium price vector supporting that optimal allocation. If the equilibrium does not obtain – if the optimal plans of the individual agents, or of the fantastical representative agent, become incapable of execution – empirical estimates of the model’s parameters cannot correspond to the equilibrium values implied by the model itself. Parameter estimates are therefore sensitive to how closely the economic environment in which the parameters were estimated corresponded to conditions of equilibrium. If the conditions under which the parameters were estimated more nearly approximated the conditions of equilibrium than does the period in which the model is being used to make conditional forecasts, those forecasts, from the point of view of the underlying equilibrium model, must be inaccurate. The Lucas critique devours its own offspring.

Barro and Krugman Yet Again on Regular Economics vs. Keynesian Economics

A lot of people have been getting all worked up about Paul Krugman’s acerbic takedown of Robert Barro for suggesting in a Wall Street Journal op-ed in 2011 that increased government spending would not stimulate the economy. Barro’s target was a claim by Agriculture Secretary Tom Vilsack that every additional dollar spent on food stamps would actually result in a net increase of $1.84 in total spending. This statement so annoyed Barro that, in a fit of pique, he wrote the following.

Keynesian economics argues that incentives and other forces in regular economics are overwhelmed, at least in recessions, by effects involving “aggregate demand.” Recipients of food stamps use their transfers to consume more. Compared to this urge, the negative effects on consumption and investment by taxpayers are viewed as weaker in magnitude, particularly when the transfers are deficit-financed.

Thus, the aggregate demand for goods rises, and businesses respond by selling more goods and then by raising production and employment. The additional wage and profit income leads to further expansions of demand and, hence, to more production and employment. As per Mr. Vilsack, the administration believes that the cumulative effect is a multiplier around two.

If valid, this result would be truly miraculous. The recipients of food stamps get, say, $1 billion but they are not the only ones who benefit. Another $1 billion appears that can make the rest of society better off. Unlike the trade-off in regular economics, that extra $1 billion is the ultimate free lunch.

How can it be right? Where was the market failure that allowed the government to improve things just by borrowing money and giving it to people? Keynes, in his “General Theory” (1936), was not so good at explaining why this worked, and subsequent generations of Keynesian economists (including my own youthful efforts) have not been more successful.
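(Just to fix the arithmetic being mocked: in the simplest textbook Keynesian cross – ignoring taxes, imports, and any monetary feedback – a $1 transfer raises total spending by

MPC + MPC^2 + MPC^3 + \cdots = \frac{MPC}{1 - MPC},

so a figure like 1.84 corresponds to a marginal propensity to consume of roughly 0.65. Whether anything like that frictionless chain of spending operates in a deep recession is, of course, exactly what is in dispute.)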

Sorry to brag, but it was actually none other than moi who (via Mark Thoma) brought this little gem to Krugman’s attention. In what is still my third most visited blog post, I expressed incredulity that Barro could ask “where was the market failure” about a situation in which unemployment had suddenly risen to more than double its pre-recession level. I also pointed out that Barro had himself previously acknowledged in a Wall Street Journal op-ed that monetary expansion could alleviate a cyclical increase in unemployment. If monetary policy (printing money on worthless pieces of paper) can miraculously reduce unemployment, why is it out of the question that government spending could also reduce unemployment, especially when it is possible to view government spending as a means of transferring cash from people with unlimited demand for money to those unwilling to increase their holdings of cash? So, given Barro’s own explicit statement that monetary policy could be stimulative, it seemed odd for him to suggest, without clarification, that it would be a miracle if fiscal policy were effective.

Apparently, Krugman felt compelled to revisit this argument of Barro’s because of the recent controversy about extending unemployment insurance, an issue to which Barro made only passing reference in his 2011 piece. Krugman again ridiculed the idea that just because regular economics says that a policy will have adverse effects under “normal” conditions, the policy must be wrongheaded even in a recession.

But if you follow right-wing talk — by which I mean not Rush Limbaugh but the Wall Street Journal and famous economists like Robert Barro — you see the notion that aid to the unemployed can create jobs dismissed as self-evidently absurd. You think that you can reduce unemployment by paying people not to work? Hahahaha!

Quite aside from the fact that this ridicule is dead wrong, and has had a malign effect on policy, think about what it represents: it amounts to casually trashing one of the most important discoveries economists have ever made, one of my profession’s main claims to be useful to humanity.

Krugman was subsequently accused of bad faith in making this argument because he, like other Keynesians, has acknowledged that unemployment insurance tends to increase the unemployment rate. Therefore, his critics argue, it was hypocritical of Krugman to criticize Barro and the Wall Street Journal for making precisely the same argument that he himself has made. Well, you can perhaps accuse Krugman of being a bit artful in his argument by not acknowledging explicitly that a full policy assessment might in fact legitimately place some limit on UI benefits, but Krugman’s main point is obviously not to assert that “regular economics” is necessarily wrong, just that Barro and the Wall Street Journal are refusing to acknowledge that countercyclical policy of some type could ever, under any circumstances, be effective. Or, to put it another way, Krugman could (and did) easily agree that increasing UI will increase the natural rate of unemployment, but, in a recession, actual unemployment is above the natural rate, and UI can cause the actual rate to fall even as it causes the natural rate to rise.

Now Barro might respond that all he was really saying in his 2011 piece was that the existence of a government spending multiplier significantly greater than zero is not supported by the empirical evidence. But there are two problems with that response. First, it would still not resolve the theoretical inconsistency between Barro’s acknowledgment that monetary policy does have magical properties in a recession and his position that fiscal policy has no such magical powers. Second, and perhaps less obviously, the empirical evidence on which Barro relies does not necessarily distinguish between periods of severe recession or depression and periods when the economy is close to full employment. If so, the empirical estimates of government spending multipliers are subject to the Lucas critique. Parameter estimates may not be stable over time, because those parameters may change depending on the cyclical phase of the economy. The multiplier at the trough of a deep business cycle may be much greater than the multiplier close to full employment. The empirical estimates for the multiplier cited by Barro make no real allowance for different cyclical phases in estimating the multiplier.
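To state the point in the simplest possible terms (a hypothetical specification, not one Barro or anyone else actually estimated): instead of fitting a single multiplier m in a regression like \Delta Y_t = \alpha + m\,\Delta G_t + \varepsilon_t, one would have to allow the multiplier to differ by cyclical phase,

\Delta Y_t = \alpha + m_1\,\Delta G_t + m_2\,(\Delta G_t \times slack_t) + \varepsilon_t,

where slack_t indicates a period of deeply depressed activity, so that the relevant multiplier in a slump is m_1 + m_2. A pooled estimate of m tells us little about m_1 + m_2, and, per the Lucas critique, even m_1 and m_2 cannot be assumed invariant to the policy change being contemplated.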

PS Scott Sumner also comes away from reading Barro’s 2011 piece perplexed by what Barro is really saying and why, and does an excellent job of trying in vain to find some coherent conceptual framework within which to understand Barro. The problem is that there is none. That’s why Barro deserves the rough treatment he got from Krugman.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
