Forget the Monetary Base and Just Pay Attention to the Price Level

Kudos to David Beckworth for eliciting a welcome concession or clarification from Paul Krugman that monetary policy is not necessarily ineffectual at the zero lower bound. The clarification is welcome because Krugman and Simon Wren-Lewis seemed to be making a big deal of insisting that monetary policy at the zero lower bound is useless if it affects only the current, but not the future, money supply, and touting the discovery as if it were a point that was not already well understood.

Now it’s true that Krugman is entitled to take credit for having come up with an elegant way of showing the difference between a permanent and a temporary increase in the monetary base, but it’s a point that, WADR, was understood even before Krugman. See, for example, the discussion in chapter 5 of Jack Hirshleifer’s textbook on capital theory (published in 1970), Investment, Interest and Capital, showing that the Fisher equation follows straightforwardly in an intertemporal equilibrium model, so that the nominal interest rate can be decomposed into a real component and an expected-inflation component. If holding money is costless, then the nominal rate of interest cannot be negative, and expected deflation cannot exceed the equilibrium real rate of interest. This implies that, at the zero lower bound, the current price level cannot be raised without raising the future price level proportionately. That is all Krugman was saying in asserting that monetary policy is ineffective at the zero lower bound, even though he couched the analysis in terms of the current and future money supplies rather than in terms of the current and future price levels. But the entire argument is implicit in the Fisher equation. And contrary to Krugman, the IS-LM model (with which I am certainly willing to coexist) offers no unique insight into this proposition; it would be remarkable if it did, because the IS-LM model in essence is a static model that has to be re-engineered to be used in an intertemporal setting.
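The Fisher-equation logic sketched above can be written out explicitly. This is a minimal restatement in my own notation (the symbols are mine, not Hirshleifer's or Krugman's): let i be the nominal interest rate, r the equilibrium real rate, P the price level, and π^e expected inflation.

```latex
% Fisher equation: nominal rate = real rate + expected inflation
i_t = r_t + \pi^e_t , \qquad \pi^e_t \equiv \frac{E_t[P_{t+1}] - P_t}{P_t}

% If holding money is costless, the nominal rate cannot be negative,
% so expected deflation cannot exceed the real rate:
i_t \ge 0 \quad \Longrightarrow \quad \pi^e_t \ge -r_t

% At the zero lower bound (i_t = 0), expected inflation is pinned down
% by the real rate, tying the future price level to the current one:
\pi^e_t = -r_t \quad \Longrightarrow \quad E_t[P_{t+1}] = (1 - r_t)\,P_t
```

The last line is the whole point: at the zero lower bound the expected future price level is a fixed proportion of the current price level, so the current price level cannot be raised without raising the expected future price level proportionately.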

Here is how Hirshleifer concludes his discussion:

The simple two-period model of choice between dated consumptive goods and dated real liquidities has been shown to be sufficiently comprehensive as to display both the quantity theorists’ and the Keynesian theorists’ predicted results consequent upon “changes in the money supply.” The seeming contradiction is resolved by noting that one result or the other follows, or possibly some mixture of the two, depending upon the precise meaning of the phrase “changes in the quantity of money.” More exactly, the result follows from the assumption made about changes in the time-distributed endowments of money and consumption goods. (pp. 150-51)

Another passage from Hirshleifer is also worth quoting:

Imagine a financial “panic.” Current money is very scarce relative to future money – and so monetary interest rates are very high. The monetary authorities might then provide an increment [to the money stock] while announcing that an equal aggregate amount of money would be retired at some date thereafter. Such a change making current money relatively more plentiful (or less scarce) than before in comparison with future money, would clearly tend to reduce the monetary rate of interest. (p. 149)

In this passage Hirshleifer accurately describes the objective of Fed policy since the crisis: provide as much liquidity as needed to prevent a panic, but without even trying to generate a substantial increase in aggregate demand by increasing inflation or expected inflation. The refusal to increase aggregate demand was implicit in the Fed’s refusal to increase its inflation target.

However, I do want to make explicit a point of disagreement between me and Hirshleifer, Krugman and Beckworth. The point is more conceptual than analytical: although the analysis of monetary policy can formally be carried out either in terms of current and future money supplies, as Hirshleifer, Krugman and Beckworth do, or in terms of current and future price levels, I prefer to reason in terms of price levels. For one thing, reasoning in terms of price levels immediately puts you in the framework of the Fisher equation, while thinking in terms of current and future money supplies puts you in the framework of the quantity theory, which I always prefer to avoid.

The problem with the quantity theory framework is that it assumes that the quantity of money is a policy variable over which a monetary authority can exercise effective control, a mistake imprinted in our economic intuition by two or three centuries of quantity-theorizing and regrettably reinforced in the second half of the twentieth century by the preposterous theoretical detour of monomaniacal Friedmanian Monetarism, as if there were no such thing as an identification problem. Thus, to analyze monetary policy by doing thought experiments that change the quantity of money is likely to mislead or confuse.

I can’t think of an effective monetary policy that was ever implemented by targeting a monetary aggregate. The optimal time path of a monetary aggregate can never be specified in advance, so that trying to target any monetary aggregate will inevitably fail, thereby undermining the credibility of the monetary authority. Effective monetary policies have instead tried to target some nominal price while allowing monetary aggregates to adjust automatically given that price. Sometimes the price being targeted has been the conversion price of money into a real asset, as was the case under the gold standard, or an exchange rate between one currency and another, as the Swiss National Bank is now doing with the franc/euro exchange rate. Monetary policies aimed at stabilizing a single price are easy to implement and can therefore be highly credible, but they are vulnerable to sudden changes with highly deflationary or inflationary implications. Nineteenth century bimetallism was an attempt to avoid or at least mitigate such risks. We now prefer inflation targeting, but we have learned (or at least we should have) from the Fed’s focus on inflation in 2008 that inflation targeting can also lead to disastrous consequences.

I emphasize the distinction between targeting monetary aggregates and targeting the price level, because David Beckworth in his post is so focused on showing 1) that the expansion of the Fed’s balance sheet under QE has been temporary and 2) that to have been effective in raising aggregate demand at the zero lower bound, the increase in the monetary base needed to be permanent. And I say: both of the facts cited by David are implied by the fact that the Fed did not raise its inflation target or, preferably, replace its inflation target with a sufficiently high price-level target. With a higher inflation target or a suitable price-level target, the monetary base would have taken care of itself.

PS If your name is Scott Sumner, you have my permission to insert “NGDP” wherever “price level” appears in this post.

Explaining the Hegemony of New Classical Economics

Simon Wren-Lewis, Robert Waldmann, and Paul Krugman have all recently devoted additional space to explaining – ruefully, for the most part – how it came about that New Classical Economics took over mainstream macroeconomics just about half a century after the Keynesian Revolution. And Mark Thoma got them all started with a complaint about the sorry state of modern macroeconomics and its failure to prevent or to cure the Little Depression.

Wren-Lewis believes that the main problem with modern macro is too much of a good thing, the good thing being microfoundations. Those microfoundations, in Wren-Lewis’s rendering, filled certain gaps in the ad hoc Keynesian expenditure functions. Although the gaps were not as serious as the New Classical School believed, adding an explicit model of intertemporal expenditure plans derived from optimization conditions and rational expectations, was, in Wren-Lewis’s estimation, an improvement on the old Keynesian theory. The improvements could have been easily assimilated into the old Keynesian theory, but weren’t because New Classicals wanted to junk, not improve, the received Keynesian theory.

Wren-Lewis believes that it is actually possible for the progeny of Keynes and the progeny of Fisher to coexist harmoniously, and despite his discomfort with the anti-Keynesian bias of modern macroeconomics, he views the current macroeconomic research program as progressive. By progressive, I interpret him to mean that macroeconomics is still generating new theoretical problems to investigate, and that attempts to solve those problems are producing a stream of interesting and useful publications – interesting and useful, that is, to other economists doing macroeconomic research. Whether the problems and their solutions are useful to anyone else is perhaps not quite so clear. But even if interest in modern macroeconomics is largely confined to practitioners of modern macroeconomics, that fact alone would not conclusively show that the research program in which they are engaged is not progressive, the progressiveness of the research program requiring no more than a sufficient number of self-selecting econ grad students, and a willingness of university departments and sources of research funding to cater to the idiosyncratic tastes of modern macroeconomists.

Robert Waldmann, unsurprisingly, takes a rather less charitable view of modern macroeconomics, focusing on its failure to discover any new, previously unknown, empirical facts about macroeconomics, or to explain known facts better than alternative models do, e.g., by more accurately predicting observed macro time-series data. By that admittedly demanding criterion, Waldmann finds nothing progressive in the modern macroeconomics research program.

Paul Krugman weighed in by emphasizing not only the ideological agenda behind the New Classical Revolution, but the self-interest of those involved:

Well, while the explicit message of such manifestos is intellectual – this is the only valid way to do macroeconomics – there’s also an implicit message: from now on, only my students and disciples will get jobs at good schools and publish in major journals. And that, to an important extent, is exactly what happened; Ken Rogoff wrote about the “scars of not being able to publish sticky-price papers during the years of new classical repression.” As time went on and members of the clique made up an ever-growing share of senior faculty and journal editors, the clique’s dominance became self-perpetuating – and impervious to intellectual failure.

I don’t disagree that there has been intellectual repression, and that this has made professional advancement difficult for those who don’t subscribe to the reigning macroeconomic orthodoxy, but I think that the story is more complicated than Krugman suggests. The reason I say that is because I cannot believe that the top-ranking economics departments at schools like MIT, Harvard, UC Berkeley, Princeton, and Penn, and other supposed bastions of saltwater thinking have bought into the underlying New Classical ideology. Nevertheless, microfounded DSGE models have become de rigueur for any serious academic macroeconomic theorizing, not only in the Journal of Political Economy (Chicago), but in the Quarterly Journal of Economics (Harvard), the Review of Economics and Statistics (MIT), and the American Economic Review. New Keynesians, like Simon Wren-Lewis, have made their peace with the new order, and old Keynesians have been relegated to the periphery, unable to publish in the journals that matter without observing the generally accepted (even by those who don’t subscribe to New Classical ideology) conventions of proper macroeconomic discourse.

So I don’t think that Krugman’s ideology plus self-interest story fully explains how the New Classical hegemony was achieved. What I think is missing from his story is the spurious methodological requirement of microfoundations foisted on macroeconomists in the course of the 1970s. I have discussed microfoundations in a number of earlier posts (here, here, here, here, and here) so I will try, possibly in vain, not to repeat myself too much.

The importance and desirability of microfoundations were never questioned. What, after all, was the neoclassical synthesis, if not an attempt, partly successful and partly unsuccessful, to integrate monetary theory with value theory, or macroeconomics with microeconomics? But in the early 1970s the focus of attempts to provide microfoundations, notably in the 1970 Phelps volume, changed from embedding the Keynesian system in a general-equilibrium framework, as Patinkin had done, to providing an explicit microeconomic rationale for the Keynesian idea that the labor market could not be cleared via wage adjustments.

In chapter 19 of the General Theory, Keynes struggled to come up with a convincing general explanation for the failure of nominal-wage reductions to clear the labor market. Instead, he offered an assortment of seemingly ad hoc arguments about why nominal-wage adjustments would not succeed in reducing unemployment, enabling all workers willing to work at the prevailing wage to find employment at that wage. This forced Keynesians into the awkward position of relying on an argument — wages tend to be sticky, especially in the downward direction — that was not really different from one used by the “Classical Economists” excoriated by Keynes to explain high unemployment: that rigidities in the price system – often politically imposed rigidities – prevented wage and price adjustments from equilibrating demand with supply in the textbook fashion.

These early attempts at providing microfoundations were largely exercises in applied price theory, explaining why self-interested behavior by rational workers and employers lacking perfect information about all potential jobs and all potential workers would not result in immediate price adjustments that would enable all workers to find employment at a uniform market-clearing wage. Although these largely search-theoretic models led to a more sophisticated and nuanced understanding of labor-market dynamics than economists had previously had, the models ultimately did not provide a fully satisfactory account of cyclical unemployment. But the goal of microfoundations was to explain a certain set of phenomena in the labor market that had not been seriously investigated, in the hope that price and wage stickiness could be analyzed as an economic phenomenon rather than being arbitrarily introduced into models as an ad hoc, albeit seemingly plausible, assumption.

But instead of pursuing microfoundations as an explanatory strategy, the New Classicals chose to impose it as a methodological prerequisite. A macroeconomic model was inadmissible unless it could be explicitly and formally derived from the optimizing choices of fully rational agents. Instead of trying to enrich and potentially transform the Keynesian model with a deeper analysis and understanding of the incentives and constraints under which workers and employers make decisions, the New Classicals used microfoundations as a methodological tool by which to delegitimize Keynesian models, those models being insufficiently or improperly microfounded. Instead of using microfoundations as a method by which to make macroeconomic models conform more closely to the imperfect and limited informational resources available to actual employers deciding to hire or fire employees, and actual workers deciding to accept or reject employment opportunities, the New Classicals chose to use microfoundations as a methodological justification for the extreme unrealism of the rational-expectations assumption, portraying it as nothing more than the consistent application of the rationality postulate underlying standard neoclassical price theory.

For the New Classicals, microfoundations became a reductionist crusade. There is only one kind of economics, and it is not macroeconomics. Even the idea that there could be a conceptual distinction between micro and macroeconomics was unacceptable to Robert Lucas, just as the idea that there is, or could be, a mind not reducible to the brain is unacceptable to some deranged neuroscientists. No science, not even chemistry, has been reduced to physics. Were it ever to be accomplished, the reduction of chemistry to physics would be a great scientific achievement. Some parts of chemistry have been reduced to physics, which is a good thing, especially when doing so actually enhances our understanding of the chemical process and results in an improved, or more exact, restatement of the relevant chemical laws. But it would be absurd and preposterous simply to reject, on supposed methodological principle, those parts of chemistry that have not been reduced to physics. And how much more absurd would it be to reject higher-level sciences, like biology and ecology, for no other reason than that they have not been reduced to physics.

But reductionism is what modern macroeconomics, under the New Classical hegemony, insists on. No exceptions allowed; don’t even ask. Meekly and unreflectively, modern macroeconomics has succumbed to the absurd and arrogant methodological authoritarianism of the New Classical Revolution. What an embarrassment.

UPDATE (11:43 AM EDST): I made some minor editorial revisions to eliminate some grammatical errors and misplaced or superfluous words.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
