
Another Nail in the Money-Multiplier Coffin

It’s been a while since my last rant about the money multiplier. I am not really feeling up to another whack at the piñata just yet, but via Not Trampis, I saw this post on the myth of the money multiplier by the estimable Barkley Rosser. Rosser discusses a recent unpublished paper by two Fed economists, Seth Carpenter and Selva Demiralp, entitled “Money, Reserves, and the Transmission of Monetary Policy: Does the Money Multiplier Exist?”

Rosser concludes his post as follows:

That Fed control over the money supply has become a phantom has been quite clear since the Minsky moment in 2008, with the Fed massively expanding its balance sheet without much resulting increase in measured money supply.  This of course has made a hash of all the people ranting about the Fed “printing money,” which presumably will lead to hyperinflation any minute (eeek!).  But the deeper story that some of us were unaware of is that apparently this disjuncture happened a long time ago.  Even so, one of our number pointed out that official Fed literature and even many Fed employees still sell the reserve base story tied to a money multiplier to the public, just as one continues to find it in the textbooks. But apparently most of them know better, and the money multiplier became a myth a long time ago.

Here’s the abstract of the Carpenter and Demiralp paper.

With the use of nontraditional policy tools, the level of reserve balances has risen significantly in the United States since 2007. Before the financial crisis, reserve balances were roughly $20 billion whereas the level has risen well past $1 trillion. The effect of reserve balances in simple macroeconomic models often comes through the money multiplier, affecting the money supply and the amount of lending in the economy. Most models currently used for macroeconomic policy analysis, however, either exclude money or model money demand as entirely endogenous, thus precluding any causal role for money. Nevertheless, some academic research and many textbooks continue to use the money multiplier concept in discussions of money. We explore the institutional structure of the transmissions mechanism beginning with open market operations through to money and loans. We then undertake empirical analysis of the relationship among reserve balances, money, and bank lending. We use aggregate as well as bank-level data in a VAR framework and document that the mechanism does not work through the standard multiplier model or the bank lending channel. In particular, if the level of reserve balances is expected to have an impact on the economy, it seems unlikely that the standard multiplier analysis will explain the effect.

And here’s my take from 25 years ago in my book Free Banking and Monetary Reform (p. 173):

The conventional models break down the money supply into high-powered money [the monetary base] and the money multiplier. The money multiplier summarizes the supposedly stable numerical relationship between high-powered money and the total stock of money. Thus an injection of reserves, by increasing high-powered money, is supposed to have a determinate multiplier effect on the stock of money. But in Tobin’s analysis, the implications of an injection of reserves were ambiguous. The result depended on how the added reserves affected interest rates and, in turn, the costs and revenues from creating deposits. It was only a legal prohibition of paying interest on deposits, which kept marginal revenue above marginal cost, that created an incentive for banks to expand whenever they acquired additional reserves.

When Regulation Q was abolished, it was lights out for the money multiplier.

Two Reviews: One Old, One New

Recently I have been working on a review of a recently published (2011) volume, The Empire of Credit: The Financial Revolution in Britain, Ireland, and America, 1688-1815 for The Journal of the History of Economic Thought. I found the volume interesting in a number of ways, but especially because it seemed to lend support to some of my ideas on why the state has historically played such a large role in the supply of money. When I first started to study economics, I was taught that money is a natural monopoly, the value of money being inevitably forced down by free competition to the value of the paper on which it was written. I believe that Milton Friedman used to make this argument (though, if I am not mistaken, he eventually stopped), and I think the argument can be found in his Program for Monetary Stability, but my memory may be playing a trick on me.

Eventually I learned, first from Ben Klein and later from Earl Thompson, that the naïve natural-monopoly argument is a fallacy, because it presumes that all moneys are indistinguishable. Earl Thompson, however, offered a very different argument, explaining that the government monopoly over money is an efficient form of emergency taxation for a country under military threat, because raising funds through ordinary taxation would be too cumbersome and time-consuming to rely on when a state is faced with an existential threat. Taking this idea, I wrote a paper “An Evolutionary Theory of the State Monopoly over Money,” eventually published (1998) in a volume Money and the Nation State. The second chapter of my book Free Banking and Monetary Reform was largely based on this paper. Earl Thompson worked out the analytics of the defense argument for a government monopoly over money in a number of places. (Here’s one.)

And here are the first two paragraphs from my review (which I have posted on SSRN):

The diverse studies collected in The Empire of Credit, ranging over both monetary and financial history and the history of monetary theory, share a common theme: the interaction between the fiscal requirements of national defense and the rapid evolution of monetary and financial institutions from the late seventeenth century to the early nineteenth century, the period in which Great Britain unexpectedly displaced France as the chief European military power, while gaining a far-flung intercontinental empire, only modestly diminished by the loss of thirteen American colonies in 1783. What enabled that interaction to produce such startling results were the economies achieved by substituting bank-supplied money (banknotes and increasingly bank deposits) for gold and silver. The world leader in the creation of these new instruments, Britain reaped the benefits of efficiencies in market transactions while simultaneously creating a revenue source (through the establishment of the Bank of England) that could be tapped by the Crown and Parliament to fund the British military, thereby enabling conquests against rivals (especially France) that lagged behind Britain in the development of flexible monetary institutions.

Though flexible, British monetary arrangements were based on a commitment to a fixed value of sterling in terms of gold, a commitment which avoided both the disastrous consequences of John Law’s brilliant, but ill-fated, monetary schemes in France, and the resulting reaction against banking that may account for the subsequent slow development of French banking and finance. However, at a crucial moment, the British were willing and able to cut the pound loose from its link to gold, providing themselves with the wherewithal to prevail in the struggle against Napoleon, thereby ensuring British supremacy for another century. (Read more.) [Update 2:37 PM EST: the paper is now available to be downloaded.]

In writing this review, I recalled a review that I wrote in 2000 for EH.net of a volume of essays (Essays in History: Financial, Economic, and Personal) by the eminent economic historian Charles Kindleberger, author of the classic Manias, Panics and Crashes. Although I greatly admired Kindleberger for his scholarship and wit, I disagreed with a lot of his specific arguments and policy recommendations, and I tried to give expression to both my admiration of Kindleberger and my disagreement with him in my review (also just posted on SSRN). Here are the first two paragraphs of that essay.

Charles P. Kindleberger, perhaps the leading financial historian of our time, has also been a prolific, entertaining, and insightful commentator and essayist on economics and economists. If one were to use Isaiah Berlin’s celebrated dichotomy between hedgehogs that know one big thing and foxes that know many little things, Kindleberger would certainly appear at or near the top of the list of economist foxes. Although Kindleberger himself never invokes Berlin’s distinction between hedgehogs and foxes, many of Kindleberger’s observations on the differences between economic theory and economic history, the difficulty of training good economic historians, and his critical assessment of grand theories of economic history such as Kondratieff long cycles, are in perfect harmony with Berlin.

So it is hard to imagine a collection of essays by Kindleberger that did not contain much that those interested in economics, finance, history, and policy — all considered from a humane and cosmopolitan perspective — would find worth reading. For those with a pronounced analytical bent (who are perhaps more inclined to prefer the output of a hedgehog than of a fox), this collection may seem a somewhat thin gruel. And some of the historical material in the first section will appear rather dry to all but the most dedicated numismatists. Nevertheless, there are enough flashes of insight, wit (my favorite is his aside that during talks on financial crises he elicits a nervous laugh by saying that nothing disturbs a person’s judgment so much as to see a friend get rich), and wisdom as well as personal reminiscences from a long and varied career (including an especially moving memoir of his relationship with his student and colleague Carlos F. Diaz-Alejandro) to repay readers of this volume. Unfortunately the volume is marred somewhat by an inordinate number of editorial lapses and mistaken attributions or misidentifications such as attributing a cutting remark about Paganini’s virtuosity to Samuel Johnson (who died when the maestro was all of two years old). (Read more) [Update 2:37 PM EST: the paper is now available to be downloaded.]

John Taylor, Post-Modern Monetary Theorist

In the beginning, there was Keynesian economics; then came Post-Keynesian economics.  After Post-Keynesian economics, came Modern Monetary Theory.  And now it seems, John Taylor has discovered Post-Modern Monetary Theory.

What, you may be asking yourself, is Post-Modern Monetary Theory all about? Great question!  In a recent post, Scott Sumner tried to deconstruct Taylor’s position, and found himself unable to determine just what it is that Taylor wants in the way of monetary policy.  How post-modern can you get?

Taylor is annoyed that the Fed is keeping interest rates too low by a policy of forward guidance, i.e., promising to keep short-term interest rates close to zero for an extended period while buying Treasuries to support that policy.

And yet—unlike its actions taken during the panic—the Fed’s policies have been accompanied by disappointing outcomes. While the Fed points to external causes, it ignores the possibility that its own policy has been a factor.

At this point, the alert reader is surely anticipating an explanation of why forward guidance aimed at reducing the entire term structure of interest rates, thereby increasing aggregate demand, has failed to do so, notwithstanding the teachings of both Keynesian and non-Keynesian monetary theory.  Here is Taylor’s answer:

At the very least, the policy creates a great deal of uncertainty. People recognize that the Fed will eventually have to reverse course. When the economy begins to heat up, the Fed will have to sell the assets it has been purchasing to prevent inflation.

Taylor seems to be suggesting that, despite low interest rates, the public is not willing to spend because of increased uncertainty.  But why wasn’t the public spending more in the first place, before all that nasty forward guidance?  Could it possibly have had something to do with business pessimism about demand and household pessimism about employment?  If the problem stems from an underlying state of pessimistic expectations about the future, the question arises whether Taylor considers such pessimism to be an element of, or related to, uncertainty.

I don’t know the answer, but Taylor posits that the public is assuming that the Fed’s policy will have to be reversed at some point. Why? Because the economy will “heat up.” As an economic term, the verb “to heat up” is pretty vague, but it seems to connote, at the very least, increased spending and employment. Which raises a further question: given a state of pessimistic expectations about future demand and employment, does a policy that, by assumption, increases the likelihood of additional spending and employment create uncertainty or diminish it?

It turns out that Taylor has other arguments for the ineffectiveness of forward guidance.  We can safely ignore his two throw-away arguments about on-again off-again asset purchases, and the tendency of other central banks to follow Fed policy.  A more interesting reason is provided when Taylor compares Fed policy to a regulatory price ceiling.

[I]f investors are told by the Fed that the short-term rate is going to be close to zero in the future, then they will bid down the yield on the long-term bond. The forward guidance keeps the long-term rate low and tends to prevent it from rising. Effectively the Fed is imposing an interest-rate ceiling on the longer-term market by saying it will keep the short rate unusually low.

The perverse effect comes when this ceiling is below what would be the equilibrium between borrowers and lenders who normally participate in that market. While borrowers might like a near-zero rate, there is little incentive for lenders to extend credit at that rate.

This is much like the effect of a price ceiling in a rental market where landlords reduce the supply of rental housing. Here lenders supply less credit at the lower rate. The decline in credit availability reduces aggregate demand, which tends to increase unemployment, a classic unintended consequence of the policy.

When economists talk about a price ceiling what they usually mean is that there is some legal prohibition on transactions between willing parties at a price above a specified legal maximum price.  If the prohibition is enforced, as are, for example, rent ceilings in New York City, some people trying to rent apartments will be unable to do so, even though they are willing to pay as much, or more, than others are paying for comparable apartments.  The only rates that the Fed is targeting, directly or indirectly, are those on US Treasuries at various maturities.  All other interest rates in the economy are what they are because, given the overall state of expectations, transactors are voluntarily agreeing to the terms reflected in those rates.  For any given class of financial instruments, everyone willing to purchase or sell those instruments at the going rate is able to do so.  For Professor Taylor to analogize this state of affairs to a price ceiling is not only novel, but thoroughly post-modern.

Nunes and Cole Write the E-Book on Market Monetarism

This post is slightly late in coming, but I want to give my fellow bloggers and valued commenters on this blog, Marcus Nunes and Benjamin Cole, a shout-out and my warmest congratulations on the publication last week of their new e-book Market Monetarism: Roadmap to Economic Prosperity.

I have not yet read the entire book, but I did read the introductory chapter available on Amazon, and I was impressed, but not surprised, by their wide knowledge and understanding of monetary economics as well as their clear, direct and engaging style. I was also pleased to find that they gave due recognition to Gustav Cassel, Ralph Hawtrey, and James Meade for their important contributions. Nor do I hold it against them that they quoted from my paper on Hawtrey and Cassel, though they did forget to mention the name of my co-author, Ron Batchelder.

Way to go, guys.

Charles Goodhart on Nominal GDP Targeting

Charles Goodhart might just be the best all-around monetary economist in the world, having made impressive contributions to monetary theory and the history of monetary theory, to monetary history and the history of monetary institutions (especially of central banking), and to the theory and, in his capacity as chief economist of the Bank of England, the practice of monetary policy. So whenever Goodhart offers his views on monetary policy, it is a good idea to pay close attention to what he says. But if there is anything to be learned from the history of economics (and I daresay the history of any scientific discipline), it is that nobody ever gets it right all the time. It’s nice to have a reputation, but sadly reputation provides no protection from error.

In response to the recent buzz about targeting nominal GDP, Goodhart, emeritus professor at the London School of Economics and an adviser to Morgan Stanley, along with two Morgan Stanley economists, Jonathan Ashworth and Melanie Baker, just published a critique of a recent speech by Mark Carney, Governor-elect of the Bank of England, in which Carney seemed to endorse targeting the level of nominal GDP (hereinafter NGDPLT). (See also Marcus Nunes’s excellent post about Goodhart et al.) Goodhart et al. have two basic complaints about NGDPLT. The first one is that our choice of an initial target level (i.e., do we think that current NGDP is now at its target or away from its target, and if so by how much) and of the prescribed growth in the target level over time would itself create destabilizing uncertainty in the process of changing to an NGDPLT monetary regime. The key distinction between a level target and a growth-rate target is that the former requires a subsequent compensatory adjustment for any deviation from the target while the latter requires no such adjustment for a deviation from the target. Because deviations will occur under any targeting regime, Goodhart et al. worry that the compensatory adjustments required by NGDPLT could trigger destabilizing gyrations in NGDP growth, especially if expectations, as they think likely, became unanchored.

This concern seems easily enough handled if the monetary authority is given, say, a 1-1.5% band around its actual target within which to operate. Inevitable variations around the target would not automatically require an immediate rapid compensatory adjustment. As long as the monetary authority remained tolerably close to its target, it would not be compelled to make a sharp policy adjustment. A good driver does not always drive down the middle of his side of the road; the driver uses all the space available to avoid having to make abrupt changes in the direction in which the car is headed. The same principle would govern the decisions of a skillful monetary authority.
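To make the driving analogy concrete, here is a stylized sketch of a band rule of my own devising (the band width, response coefficient, and functional form are all illustrative assumptions, not anything proposed by Goodhart et al.):

```python
def policy_adjustment(ngdp_gap_pct, band_pct=1.5, response=0.5):
    """Stylized level-targeting rule with a tolerance band.

    ngdp_gap_pct: percent deviation of NGDP from its target path.
    Inside the band the authority stands pat; outside it, it leans
    against only the portion of the gap that exceeds the band.
    """
    if abs(ngdp_gap_pct) <= band_pct:
        return 0.0
    excess = abs(ngdp_gap_pct) - band_pct
    return -response * excess * (1 if ngdp_gap_pct > 0 else -1)

# Small misses require no abrupt correction...
assert policy_adjustment(1.0) == 0.0
# ...while a 3% overshoot draws a measured tightening, not a lurch.
assert policy_adjustment(3.0) == -0.75
```

The point of the dead zone is exactly the driver's: the authority uses the whole width of its lane rather than steering violently back to the centerline after every small deviation.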

Another concern of Goodhart et al. is that the choice of the target growth rate of NGDP depends on how much real growth we think the economy is capable of. If real growth of 3% a year is possible, then the corresponding NGDP level target depends on how much inflation policy makers believe necessary to achieve that real GDP growth rate. If the “correct” rate of inflation is about 2%, then the targeted level of NGDP should grow at 5% a year. But Goodhart et al. are worried that achievable growth may be declining. If so, NGDPLT at 5% a year will imply more than 2% annual inflation.

Effectively, any overestimation of the sustainable real rate of growth, and such overestimation is all too likely, could force an MPC [monetary policy committee], subject to a level nominal GDP target, to soon have to aim for a significantly higher rate of inflation. Is that really what is now wanted? Bring back the stagflation of the 1970s; all is forgiven?

With all due respect, I find this concern greatly overblown. Even if the expectation of 3% real growth is wildly optimistic, say 2% too high, a 5% NGDP growth path would imply only 4% inflation. That might be too high a rate for Goodhart’s taste, or mine for that matter, but it would be a far cry from the 1970s, when inflation was often in the double-digits. Paul Volcker achieved legendary status in the annals of central banking by bringing the US rate of inflation down to 3.5 to 4%, so one needs to maintain some sense of proportion in these discussions.
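The point about proportion can be checked with the simple growth-accounting arithmetic underlying the target (a minimal sketch; the decomposition is the standard approximation, and the numbers are the ones from the discussion above):

```python
# NGDP growth decomposes, to a first approximation, into real growth plus
# inflation: g_ngdp ≈ g_real + pi, so pi ≈ g_ngdp - g_real.
# Rates below are in percentage points.

def implied_inflation(ngdp_target_growth, real_growth):
    """Inflation implied by an NGDP growth target and realized real growth."""
    return ngdp_target_growth - real_growth

# The intended outcome: a 5% NGDP target with 3% real growth yields 2% inflation.
assert implied_inflation(5.0, 3.0) == 2.0

# The pessimistic case contemplated above: real growth overestimated by a full
# 2 points still yields only 4% inflation, nowhere near 1970s double digits.
assert implied_inflation(5.0, 1.0) == 4.0
```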

Finally, Goodhart et al. invoke the Phillips Curve.

[A]n NGDP target would appear to run counter to the previously accepted tenets of monetary theory. Perhaps the main claim of monetary economics, as persistently argued by Friedman, and the main reason for having an independent Central Bank, is that over the medium and longer term monetary forces influence only monetary variables. Other real (e.g. supply-side) factors determine growth; the long-run Phillips curve is vertical. Do those advocating a nominal GDP target now deny that? Do they really believe that faster inflation now will generate a faster, sustainable, medium- and longer-term growth rate?

While it is certainly undeniable that Friedman showed, as, in truth, many others had before him, that, for an economy in approximate full-employment equilibrium, increased inflation cannot permanently reduce unemployment, it is far from obvious (to indulge in a bit of British understatement) that we are now in a state of full-employment equilibrium. If the economy is not now in full-employment equilibrium, then monetary-neutrality propositions, according to which money influences only monetary, but not real, variables in the medium and longer term, are of no relevance to policy. Those advocating a nominal GDP target need not deny that the long-run Phillips Curve is vertical, though, as I have argued previously (here, here, and here) the proposition that the long-run Phillips Curve is vertical is very far from being the natural law that Goodhart and many others seem to regard it as. And if Goodhart et al. believe that we in fact are in a state of full-employment equilibrium, then they ought to say so forthrightly, and they ought to make an argument to justify that far from obvious characterization of the current state of affairs.

Having said all that, I do have some sympathy with the following point made by Goodhart et al.

Given our uncertainty about sustainable growth, an NGDP target also has the obvious disadvantage that future certainty about inflation becomes much less than under an inflation (or price level) target. In order to estimate medium- and longer-term inflation rates, one has first to take some view about the likely sustainable trends in future real output. The latter is very difficult to do at the best of times, and the present is not the best of times. So shifting from an inflation to a nominal GDP growth target is likely to have the effect of raising uncertainty about future inflation and weakening the anchoring effect on expectations of the inflation target.

That is one reason why, in my book Free Banking and Monetary Reform, I advocated Earl Thompson’s proposal for a labor standard aimed at stabilizing average wages (or, more precisely, average expected wages). But if you stabilize wages, and productivity is falling, then prices must rise. That’s just a matter of arithmetic. But there is no reason why the macroeconomically optimal rate of inflation should be invariant with respect to the rate of technological progress.
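The arithmetic in question is just the unit-labor-cost identity: if prices track unit labor costs, inflation is approximately wage growth minus productivity growth. A minimal sketch, with illustrative numbers of my own choosing:

```python
# If prices track unit labor costs, then approximately:
#   inflation ≈ wage_growth - productivity_growth
# A labor standard pins wage growth at (say) zero, so inflation then moves
# inversely with productivity growth. Rates in percentage points.

def price_inflation(wage_growth, productivity_growth):
    return wage_growth - productivity_growth

# Stabilized wages with productivity rising 2% a year: mild deflation.
assert price_inflation(0.0, 2.0) == -2.0

# Stabilized wages with productivity *falling* 1% a year: prices must rise.
assert price_inflation(0.0, -1.0) == 1.0
```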

HT:  Bill Woolsey

The Social Cost of Finance

Noah Smith has a great post that bears on the topic that I have been discussing of late (here and here): whether the growth of the US financial sector over the past three decades had anything to do with the decline in the real rate of interest that seems to have occurred over the same period. I have been suggesting that there may be reason to believe that the growth in the financial sector (from about 5% of GDP in 1980 to 8% in 2007) has reduced the productivity of the rest of the economy, because a not insubstantial part of the earnings of the financial sector has been extracted from relatively unsophisticated, informationally disadvantaged traders and customers. Much of what financial firms do is aimed at obtaining an information advantage from which profit can be extracted, just as athletes devote resources to gaining a competitive advantage. The resources devoted to gaining informational advantage are mostly wasted, being used to transfer, not create, wealth. This seems to be true as a matter of theory; what is less clear is whether enough resources have been wasted to cause a non-negligible deterioration in economic performance.

Noah underscores the paucity of our knowledge by referring to two papers, one by Robin Greenwood and David Scharfstein (recently published in the Journal of Economic Perspectives) and the other, a response by John Cochrane posted on his blog (see here for the PDF). The Greenwood and Scharfstein paper provides theoretical arguments and evidence that tend to support the proposition that the US financial sector is too large. Here is how they sum up their findings.

First, a large part of the growth of finance is in asset management, which has brought many benefits including, most notably, increased diversification and household participation in the stock market. This has likely lowered required rates of return on risky securities, increased valuations, and lowered the cost of capital to corporations. The biggest beneficiaries were likely young firms, which stand to gain the most when discount rates fall. On the other hand, the enormous growth of asset management after 1997 was driven by high fee alternative investments, with little direct evidence of much social benefit, and potentially large distortions in the allocation of talent. On net, society is likely better off because of active asset management but, on the margin, society would be better off if the cost of asset management could be reduced.

Second, changes in the process of credit delivery facilitated the expansion of household credit, mainly in residential mortgage credit. This led to higher fee income to the financial sector. While there may be benefits of expanding access to mortgage credit and lowering its cost, we point out that the U.S. tax code already biases households to overinvest in residential real estate. Moreover, the shadow banking system that facilitated this expansion made the financial system more fragile.

In his response, Cochrane offers a number of reasons why Greenwood and Scharfstein are understating the benefits generated by active asset management. Here is a passage from Cochrane’s paper (quoted also by Noah) that I would like to focus on.

I conclude that information trading of this sort sits at the conflict of two externalities / public goods. On the one hand, as French points out, “price impact” means that traders are not able to appropriate the full value of the information they bring, so there can be too few resources devoted to information production (and digestion, which strikes me as far more important). On the other hand, as Greenwood and Scharfstein point out, information is a non-rival good, and its exploitation in financial markets is a tournament (first to use it gets all the benefit) so the theorem that profits you make equal the social benefit of its production is false. It is indeed a waste of resources to bring information to the market a few minutes early, when that information will be revealed for free a few minutes later. Whether we have “too much” trading, too many resources devoted to finding information that somebody already has or will be revealed in a few minutes, or “too little” trading, markets where prices go for long times not reflecting important information, as many argued during the financial crisis, seems like a topic which neither theory nor empirical work has answered with any sort of clarity.

Cochrane’s characterization of information trading as a public good is not wrong, inasmuch as we all benefit from the existence of markets for goods and assets, even those of us that don’t participate routinely (or ever) in those markets, first because the existence of those markets provides us with opportunities to trade that may, at some unknown future time, become very valuable to us, and second, because the existence of markets contributes to the efficient utilization of resources, thereby increasing the total value of output. Because the existence of markets is a kind of public good, it may be true that even more market trading than now occurs would be socially beneficial. Suppose that every trade involves a transaction cost of 5 cents, and that the transactions cost prevents at least one trade from taking place, because the expected gain to the traders from that trade would only be 4 cents. But since that unconsummated trade would also confer a benefit on third parties, by improving the allocation of resources ever so slightly, causing total output to rise by, say, 3 cents, it would be worth it to the rest of us to subsidize the parties to that unconsummated trade by rebating some part of the transactions cost associated with that trade.
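The numbers in that example can be laid out explicitly (a minimal sketch; all figures in cents, taken from the example above):

```python
# Figures in cents, from the example above.
transaction_cost = 5.0
private_gain = 4.0       # expected gain to the two parties to the trade
external_benefit = 3.0   # third-party gain from a slightly better allocation

# Privately the trade is not worth consummating: 4 < 5.
assert private_gain < transaction_cost

# Socially it is: total benefit 4 + 3 = 7 exceeds the 5-cent cost.
social_surplus = private_gain + external_benefit - transaction_cost
assert social_surplus == 2.0

# A rebate of at least the 1-cent private shortfall induces the trade, and any
# rebate up to the 3-cent external benefit still leaves third parties ahead.
min_rebate = transaction_cost - private_gain
assert min_rebate == 1.0 and min_rebate <= external_benefit
```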

But here’s my problem with Cochrane’s argument. Let us imagine that there is some unique social optimum, or at least a defined set of Pareto-optimal allocations, which we are trying to attain, or to come as close as possible to. The existence of functioning markets certainly helps us come closer to the set of Pareto-optimal allocations than if markets did not exist. Cochrane is suggesting that, by devoting more resources to the production of information (which in a basically free-market, private-property economy involves the creation of private informational advantages) we get more trading, and with more trading we come closer to the set of Pareto-optimal allocations than with less trading. However, it seems plausible that the production of additional information and the increase in trading activity is subject to diminishing returns, in the sense that eventually obtaining additional information and engaging in additional trades reduces the distance between the actual allocation and the set of Pareto-optimal allocations by successively smaller amounts. Otherwise, we would in fact reach Pareto optimality. So, as we devote more and more resources to producing information and to trading, the amount of public-good co-generation must diminish. But this means that the negative externality associated with using increasing amounts of resources to produce private informational advantages must at some point — and probably fairly quickly — overwhelm the public-good co-generated by increased trading.
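A stylized numerical sketch of that argument (the geometric closure rate and the constant cost are my illustrative assumptions): suppose each successive unit of resources devoted to information production closes a fixed fraction of the remaining gap to the set of Pareto-optimal allocations, at a constant resource cost per unit.

```python
# Gap to the Pareto frontier starts at 100; each additional unit of resources
# devoted to information production and trading closes 30% of the *remaining*
# gap (diminishing returns), while costing a constant 5 per unit.
gap, cost_per_unit, closure_rate = 100.0, 5.0, 0.3

marginal_benefits = []
for _ in range(10):
    closed = closure_rate * gap   # social benefit produced by this unit of effort
    marginal_benefits.append(closed)
    gap -= closed

# Marginal social benefit shrinks geometrically with each additional unit...
assert all(a > b for a, b in zip(marginal_benefits, marginal_benefits[1:]))

# ...so it must eventually fall below the constant marginal cost: here the
# first six units pay their way socially, and the seventh does not.
first_unprofitable = next(i for i, b in enumerate(marginal_benefits) if b < cost_per_unit)
assert first_unprofitable == 6
```

The crossover point depends entirely on the assumed numbers, of course; the qualitative point is only that, with diminishing returns, a crossover must come eventually.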

So although Cochrane has a theoretical point that, without more evidence than we have now, we can’t necessarily be sure that the increase in resources devoted to finance has been associated with a net social loss, I am still inclined to doubt strongly that, at the margin, there are net positive social benefits from adding resources to finance. In this regard, the paper (cited by Greenwood and Scharfstein) “The Allocation of Talent: Implications for Growth” by Kevin Murphy, Andrei Shleifer and Robert Vishny is well worth consulting.

Falling Real Interest Rates, Winner-Take-All Markets, and Lance Armstrong

In my previous post, I suggested that real interest rates are largely determined by expectations, entrepreneurial expectations of profit and household expectations of future income. Increased entrepreneurial optimism implies that entrepreneurs are revising upwards the anticipated net cash flows from the current stock of capital assets, in other words an increasing demand for capital assets. Because the stock of capital assets doesn’t change much in the short run, an increased demand for those assets tends, in the short run, to raise real interest rates as people switch from fixed income assets (bonds) into the real assets associated with increased expected net cash flows. Increased optimism by households about their future income prospects implies that their demand for long-lived assets, real or financial, tends to decline as households devote an increased share of current income to present consumption and less to saving for future consumption, because an increase in future income reduces the amount of current savings needed to achieve a given level of future consumption. The more optimistic I am about my future income, the less I will save in the present. If I win the lottery, I will start spending even before I collect my winnings. The reduced household demand for long-lived assets with which to provide for future consumption reduces the value of such assets, implying, for given expectations of their future yields, an increased real interest rate.
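The household side of this story can be illustrated with a standard two-period consumption-smoothing calculation (log utility, the discount factor, and the income numbers are all my illustrative assumptions):

```python
def saving(y1, y2, r, beta=0.96):
    """Two-period log-utility consumption smoothing:
    maximize ln(c1) + beta*ln(c2)  subject to  c2 = (y1 - c1)*(1 + r) + y2.
    The optimum sets c1 = wealth / (1 + beta), wealth = y1 + y2/(1 + r)."""
    wealth = y1 + y2 / (1 + r)
    c1 = wealth / (1 + beta)
    return y1 - c1  # current income not consumed, i.e., demand for assets

# At a 2% real rate, greater optimism about future income means less saving
# today, and hence less demand for the long-lived assets with which households
# provide for future consumption.
pessimistic = saving(100.0, 80.0, 0.02)
optimistic = saving(100.0, 120.0, 0.02)
assert optimistic < pessimistic
```

With these particular numbers the optimistic household actually dissaves (borrows against its expected future income), which is just the lottery-winner point in the paragraph above.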

This is the appropriate neoclassical (Fisherian) framework within which to think about the determination of real interest rates. The Fisherian theory may not be right, but I don’t think that we have another theory of comparable analytical power and elegance. Other theories are just ad hoc, and lack the aesthetic appeal of the Fisherian theory. Alas, the world is a messy place, and we have no guarantee that the elegant theory will always win out. Truth and beauty need not be the same. (Sigh!)

Commenting on my previous post, Joshua Wojnilower characterized my explanation as “a combination of a Keynesian-demand side story in the first paragraph and an Austrian/Lachmann subjective expectations view in the second section.” I agree that Keynes emphasized the importance of changes in the state of entrepreneurial expectations in causing shifts in the marginal efficiency of capital, and that Austrian theory is notable for its single-minded emphasis on the subjectivity of expectations. But these ideas are encompassed by the Fisherian neoclassical paradigm, entrepreneurial expectations about profits determining the relevant slope of the production possibility curve embodying opportunities for the current and future production of consumption goods, on the one hand, and household expectations about future income determining the slope of household indifference curves reflecting their willingness to exchange current for future consumption, on the other. So it’s all in Fisher.

Thus, as I observed, falling real interest rates could be explained, under the Fisherian theory, by deteriorating entrepreneurial expectations, or by worsening household expectations about future income (employment). In my previous post, I suggested that, at least since the 2007-09 downturn, entrepreneurial profit expectations have been declining along with the income (employment) expectations of households. However, I am reluctant to suggest that this trend of expectational pessimism started before the 2007-09 downturn. One commenter, Diego Espinosa, offered some good reasons to think that since 2009 entrepreneurial expectations have been improving, so that falling real interest rates must be attributed to monetary policy. Although I find it implausible that entrepreneurial expectations have recovered (at least fully) since the 2007-09 downturn, I take Diego’s points seriously, and I am going to try to think through his arguments carefully, and perhaps respond further in a future post.

I also suggested in my previous post that there might be other reasons why real interest rates have been falling, which brings me to the point of this post. By way of disclaimer, I would say that what follows is purely speculative, and I raise it only because the idea seems interesting and worth thinking about, not because I am convinced that it is empirically significant in causing real interest rates to decline over the past two or three decades.

Almost ten months ago, I discussed the basic idea in a post in which I speculated about why there is no evidence of a strong correlation between reductions in marginal income tax rates and economic growth, notwithstanding the seemingly powerful theoretical argument for such a correlation. Relying on Jack Hirshleifer’s important distinction between the social and private value of information, I argued that insofar as reduced marginal tax rates contributed to an expansion of the financial sector of the economy, reduced marginal tax rates may have retarded, rather than spurred, growth.  The problem with the financial sector is that the resources employed in that sector, especially resources devoted to trading, are socially wasted, the profits accruing to trading reflecting not net additions to output, but losses incurred by other traders. In their quest for such gains, trading establishments incur huge expenses with a view to obtaining information advantages by which profits can be extracted as a result of trading with the informationally disadvantaged.

But financial trading is not the only socially wasteful activity that has attracted vast amounts of resources from other (socially productive) activities, i.e., making and delivering real goods and services valued by consumers. There’s a whole set of markets that fall under the heading of winner-take-all markets. There are some who attribute increasing income inequality to the recent proliferation of winner-take-all markets. What distinguishes these markets is that, as the name implies, rewards in these markets are very much skewed to the most successful participants. Participants compete for a reward, and rewards are distributed very unevenly, small differences in performance implying very large differences in reward. Because the payoff at the margin to an incremental improvement in performance is so large, the incentives to devote resources to improve performance are inefficiently exaggerated. Because of the gap between the large private return and the near-zero social return from improved performance, far too much effort and resources are wasted on achieving minor gains in performance. Lance Armstrong is but one of the unpleasant outcomes of a winner-take-all market.
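The over-investment logic can be sketched with a toy Tullock-style contest (my illustration, not an argument made in the post): N identical contestants spend effort competing for a prize V, with win probability proportional to effort. In the symmetric Nash equilibrium each spends (N − 1)V/N², so aggregate spending is (N − 1)V/N, which approaches the full prize value as N grows, with nearly the entire prize dissipated in socially unproductive effort:

```python
# Toy Tullock contest (illustrative sketch; parameters hypothetical).
# N identical players exert effort x for a prize V; player i wins with
# probability x_i / sum(x). The symmetric Nash equilibrium effort is
# x* = (N - 1) * V / N**2, so aggregate effort is (N - 1) * V / N.

def equilibrium_effort(n_players, prize):
    """Per-player effort in the symmetric Nash equilibrium."""
    return (n_players - 1) * prize / n_players**2

def dissipated_share(n_players):
    """Fraction of the prize burned up in aggregate contest effort."""
    return (n_players - 1) / n_players

for n in (2, 10, 100):
    total = n * equilibrium_effort(n, prize=1.0)
    print(f"N={n:3d}: aggregate effort = {total:.2f} of the prize")

# As N grows, nearly the entire prize is dissipated in rent-seeking effort.
assert dissipated_share(100) > 0.95
```

The effort is privately rational but, since it only reallocates the prize rather than producing anything, socially wasted, which is the sense in which incentives in these markets are “inefficiently exaggerated.”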

It is also worth noting that competition in winner-take-all markets is far from benign. Sports leagues, which are classic examples of winner-take-all markets, operate on the premise that competition must be controlled, not just to prevent match-ups from being too lopsided, but to keep unrestricted competition from driving up costs to uneconomic levels. At one time, major league baseball had a reserve clause. The reserve clause exists no longer, but salary caps and other methods of controlling competition were needed to replace it. The main, albeit covert, function of the NCAA is to suppress competition for college athletes that would render college football and college basketball unprofitable if it were uncontrolled, with player salaries determined by supply and demand.

So if the share of economic activity taking place in winner-take-all markets has increased, the waste of resources associated with such markets has likely been increasing as well. Because of the distortion in the pricing of resources employed in winner-take-all markets, those resources typically receiving more than their net social product, employers in non-winner-take-all markets must pay an inefficient premium to employ those overpaid resources. These considerations suggest that the return on investment in non-winner-take-all markets may also be depressed because of such pricing distortions. But I am not sure that this static distortion has a straightforward implication about the trend of real interest rates over time.

A more straightforward connection between falling real interest rates and the increase in share of resources employed in winner-take-all markets might be that winner-take-all markets (e.g., most of the financial sector) are somehow diverting those most likely to innovate and generate new productive ideas into socially wasteful activities. That hypothesis certainly seems to accord with the oft-heard observation that, until recently at any rate, a disproportionate share of the best and brightest graduates of elite institutions of higher learning have been finding employment on Wall Street and in hedge funds. If so, the rate of technological advance in the productive sector of the economy would have been less rapid than the rate of advance in the unproductive sector of the economy. Somehow that doesn’t seem like a recipe for increasing the rate of economic growth and might even account for declining real interest rates. Something to think about as you watch the Lance Armstrong interview tomorrow night.

Why Are Real Interest Rates So Low, and Will They Ever Bounce Back?

In his recent post commenting on the op-ed piece in the Wall Street Journal by Michael Woodford and Frederic Mishkin on nominal GDP level targeting (hereinafter NGDPLT), Scott Sumner made the following observation.

I would add that Woodford’s preferred interest rate policy instrument is also obsolete.  In the next recession, and probably the one after that, interest rates will again fall to zero.  Indeed the only real suspense is whether they’ll be able to rise significantly above zero before the next recession hits.  In the US in 1937, Japan in 2001, and the eurozone in 2011, rates had barely nudged above zero before the next recession hit. Ryan Avent has an excellent post discussing this issue.

Perhaps I am misinterpreting him, but Scott seems to think that the decline in real interest rates reflects some fundamental change in the economy since approximately the start of the 21st century. Current real rates are indeed very low, below zero on US Treasuries well up the yield curve. The real rate is unobservable, but it is related to (though not identical with) the yield on TIPS, which is now negative at maturities up to 10 years. The fall in real rates partly reflects the cyclical tendency for the expected rate of return on new investment to fall in recessions, but real interest rates were falling even before the downturn started in 2007.
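For concreteness (hypothetical yields for illustration, not actual market quotes): the TIPS yield approximates the real rate, and the gap between nominal and TIPS yields of matching maturity gives the breakeven (expected) inflation rate implied by the Fisher relation:

```python
# Reading the (approximate) real rate off TIPS quotes. The yield numbers
# below are hypothetical, chosen only to illustrate the arithmetic.
# Fisher relation: nominal yield ~ real yield + expected inflation, so
# breakeven inflation ~ nominal Treasury yield - TIPS yield.

nominal_10y = 0.018   # hypothetical 10-year nominal Treasury yield
tips_10y = -0.006     # hypothetical 10-year TIPS (real) yield

breakeven_inflation = nominal_10y - tips_10y
print(f"approx. real rate:   {tips_10y:+.2%}")
print(f"breakeven inflation: {breakeven_inflation:+.2%}")

# A negative TIPS yield means the market-clearing real rate is below zero
# well out the yield curve, as described in the text.
assert tips_10y < 0
```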

In this post, at any rate, Scott doesn’t explain why the real rate of return on investment is falling. In the General Theory, Keynes speculated about the possibility that after the great industrialization of the 19th and early 20th centuries, new opportunities for investment were becoming exhausted. Alvin Hansen, an early American convert to Keynesianism, developed this idea into what he called the secular-stagnation hypothesis, a hypothesis suggesting that, after World War II, even with very low interest rates, the US economy was likely to relapse into depression. The postwar boom seemed to disprove Hansen’s idea, which became a kind of historical curiosity, if not an embarrassment. I wonder if Scott thinks that Keynes and Hansen were just about a half-century ahead of their time, or does he have some other reason in mind for why he thinks that real interest rates are destined to be very low?

One possibility, which, in a sense, is the optimistic take on our current predicament, is that low real interest rates are the result of bad monetary policy, the obstacle to an economic expansion that, in the usual course of events, would raise real interest rates back to more “normal” levels. There are two problems with this interpretation. First, the decline in real interest rates began in the last decade well before the 2007-09 downturn. Second, why does Scott, evidently accepting Ryan Avent’s pessimistic assessment of the life-expectancy of the current recovery notwithstanding rapidly increasing support for NGDPLT, anticipate a relapse into recession before the recovery raises real interest rates above their current near-zero levels? Whatever the explanation, I look forward to hearing more from Scott about all this.

But in the meantime, here are some thoughts of my own about our low real interest rates.

First, it can’t be emphasized too strongly that low real interest rates are not caused by Fed “intervention” in the market. The Fed can buy up all the Treasuries it wants to, but doing so could not force down interest rates if those low interest rates were inconsistent with expected rates of return on investment and the marginal rate of time preference of households. Despite low real interest rates, consumers are not rushing to borrow money at low rates to increase present consumption, nor are businesses rushing to take advantage of low real interest rates to undertake shiny new investment projects. Current low interest rates are a reflection of the expectations of the public about their opportunities for trade-offs between current and future consumption and between current and future production and their expectations about future price levels and interest rates. It is not the Fed that is punishing savers, as the editorial page of the Wall Street Journal constantly alleges. Rather, it is the distilled wisdom of market participants that is determining how much any individual should be rewarded for the act of abstaining from current consumption. Unfortunately, there is so little demand for resources to be used to increase future output that the act of abstaining from current consumption contributes essentially nothing, at the margin, to the increase of future output, which is why the market is now offering next to no reward for a marginal abstention from current consumption.

Second, interest rates reflect the expectations of businesses and investors about the profitability of investing in new capital, and the expectations of households about their future incomes (largely dependent on expectations about future employment). These expectations – about profitability and about future incomes — are distinct, but they are clearly interdependent. If businesses are optimistic about the profitability of future investment, households are likely to be optimistic about future incomes. If households are pessimistic about future incomes, businesses are unlikely to expect investments in new capital to be profitable. If real interest rates are stuck at zero, it suggests that businesses and households are stuck in a mutually reinforcing cycle of pessimistic expectations — households about future income and employment and businesses about the profitability of investing in new capital. Expectations, as I have said before, are fundamental. Low interest rates and secular stagnation need not be the result of an inevitable drying up of investment opportunities; they may be the result of a vicious cycle of mutually reinforcing pessimism by households and businesses.

The simple Keynesian model — at least the Keynesian-cross version of intro textbooks or even the IS-LM version of intermediate textbooks – generally holds expectations constant. But in fact, it is through the adjustment of expectations that full-employment equilibrium is reached. For fiscal or monetary policy to work, it must alter expectations. Conventional calculations of spending or tax multipliers, which implicitly hold expectations constant, miss the point, which is to alter expectations.

Similarly, as I have tried to suggest in my previous two posts, what Friedman called the natural rate of unemployment may itself depend on expectations. A change in monetary policy may alter expectations in a manner that reduces the natural rate. A straightforward application of the natural-rate model leads some to dismiss a reduction in unemployment associated with a small increase in the rate of inflation as inefficient, because the increase in employment results from workers being misled into accepting jobs that will turn out to pay workers a lower real wage than they had expected. But even if that is so, the increase in employment may still be welfare-increasing, because the employment of each worker improves the chances that another worker will become employed. The social benefit of employment may be greater than the private benefit. In that case, the apparent anomaly (from the standpoint of the natural-rate hypothesis) that measurements of social well-being seem to be greatest when employment is maximized actually makes perfectly good sense.

In an upcoming post, I hope to explore some other possible explanations for low real interest rates.

The Lucas Critique Revisited

After writing my previous post, I reread Robert Lucas’s classic article “Econometric Policy Evaluation: A Critique,” surely one of the most influential economics articles of the last half century. While the main point of the article was not entirely original, as Lucas himself acknowledged in the article, so powerful was his explanation of the point that it soon came to be known simply as the Lucas Critique. The Lucas Critique says that if a certain relationship between two economic variables has been estimated econometrically, policy makers, in formulating a policy for the future, cannot rely on that relationship to persist once a policy aiming to exploit the relationship is adopted. The motivation for the Lucas Critique was the Friedman-Phelps argument that a policy of inflation would fail to reduce the unemployment rate in the long run, because workers would eventually adjust their expectations of inflation, thereby draining inflation of any stimulative effect. By restating the Friedman-Phelps argument as the application of a more general principle, Lucas reinforced and solidified the natural-rate hypothesis, thereby establishing a key principle of modern macroeconomics.

In my previous post I argued that microeconomic relationships, e.g., demand curves and marginal rates of substitution, are, as a matter of pure theory, not independent of the state of the macroeconomy. In an interdependent economy all variables are mutually determined, so there is no warrant for saying that microrelationships are logically prior to, or even independent of, macrorelationships. If so, then the idea of microfoundations for macroeconomics is misleading, because all economic relationships are mutually interdependent; some relationships are not more basic or more fundamental than others. The kernel of truth in the idea of microfoundations is that there are certain basic principles or axioms of behavior that we don’t think an economic model should contradict, e.g., arbitrage opportunities should not be left unexploited – people should not pass up obvious opportunities, such as mutually beneficial offers of exchange, to increase their wealth or otherwise improve their state of well-being.

So I was curious to see whether Lucas, while addressing the issue of how price expectations affected output and employment, recognized the possibility that a microeconomic relationship could be dependent on the state of the macroeconomy. For my purposes, the relevant passage occurs in section 5.3 (subtitled “Phillips Curves”) of the paper. After working out the basic theory earlier in the paper, Lucas, in section 5, provided three examples of how econometric estimates of macroeconomic relationships would mislead policy makers if the effect of expectations on those relationships were not taken into account. The first two subsections treated consumption expenditures and the investment tax credit. The passage that I want to focus on consists of the first two paragraphs of subsection 5.3 (which I now quote verbatim except for minor changes in Lucas’s notation).

A third example is suggested by the recent controversy over the Phelps-Friedman hypothesis that permanent changes in the inflation rate will not alter the average rate of unemployment. Most of the major econometric models have been used in simulation experiments to test this proposition; the results are uniformly negative. Since expectations are involved in an essential way in labor and product market supply behavior, one would presume, on the basis of the considerations raised in section 4, that these tests are beside the point. This presumption is correct, as the following example illustrates.

It will be helpful to utilize a simple, parametric model which captures the main features of the expectational view of aggregate supply – rational agents, cleared markets, incomplete information. We imagine suppliers of goods to be distributed over N distinct markets i, i = 1, . . ., N. To avoid index number problems, suppose that the same (except for location) good is traded in each market, and let y_it be the log of quantity supplied in market i in period t. Assume, further, that the supply y_it is composed of two factors

y_it = Py_it + Cy_it,

where Py_it denotes normal or permanent supply, and Cy_it cyclical or transitory supply (both again in logs). We take Py_it to be unresponsive to all but permanent relative price changes or, since the latter have been defined away by assuming a single good, simply unresponsive to price changes. Transitory supply Cy_it varies with perceived changes in the relative price of goods in i:

Cy_it = β(p_it – Ep_it),

where p_it is the log of the actual price in i at time t, and Ep_it is the log of the general (geometric average) price level in the economy as a whole, as perceived in market i.

Let’s take a moment to ponder the meaning of Lucas’s simplifying assumption that there is just one good. Relative prices (except for spatial differences in an otherwise identical good) are fixed by assumption; a disequilibrium (or suboptimal outcome) can arise only because of misperceptions of the aggregate price level. So, by explicit assumption, Lucas rules out the possibility that any microeconomic relationship depends on macroeconomic conditions. Note also that Lucas does not provide an account of the process by which market prices are established at each location, nothing being said about demand conditions. For example, if suppliers at location i perceive a price (transitorily) above the equilibrium price, and respond by (mistakenly) increasing output, thereby increasing their earnings, do those suppliers increase their demand to consume output? Suppose suppliers live and purchase at locations other than where they are supplying product, so that a supplier at location i purchases at location j, where i does not equal j. If a supplier at location i perceives an increase in price at location i, will his demand to purchase the good at location j increase as well? Will the increase in demand at location j cause an increase in the price at location j? What if there is a one-period lag between supplier receipts and their consumption demands? Lucas provides no insight into these possible ambiguities in his model.

Stated more generally, the problem with Lucas’s example is that it seems to be designed to exclude a priori the possibility of every type of disequilibrium but one, a disequilibrium corresponding to a single type of informational imperfection. Reasoning on the basis of that narrow premise, Lucas shows that, under a given expectation of the future price level, an econometrician would find a positive correlation between the price level and output — a negatively sloped Phillips Curve. Yet, under the same assumptions, Lucas also shows that an anticipated policy to raise the rate of inflation would fail to raise output (or, by implication, increase employment). But, given his very narrow underlying assumptions, it seems plausible to doubt the robustness of Lucas’s conclusion. Proving the validity of a proposition requires more than constructing an example in which the proposition is shown to be valid. That would be like trying to prove that the sides of every triangle are equal in length by constructing a triangle whose angles are all equal to 60 degrees, and then claiming that, because the sides of that triangle are equal in length, the sides of all triangles are equal in length.
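Lucas’s own conclusion can at least be reproduced in a bare-bones simulation of his supply relation (my sketch; all parameter values are hypothetical): output responds only to unanticipated price movements, so an econometrician regressing output on prices recovers a Phillips-type correlation, yet raising the anticipated inflation rate leaves average output untouched.

```python
import numpy as np

# Bare-bones simulation of Lucas's supply relation, y_t = Py_t + beta*(p_t - Ep_t)
# (my illustrative sketch; all parameter values are hypothetical).
rng = np.random.default_rng(0)
beta = 0.8
T = 5000

def simulate(anticipated_inflation):
    trend = anticipated_inflation * np.arange(T)
    surprise = rng.normal(0.0, 0.02, T)   # unanticipated price shocks
    permanent = rng.normal(0.0, 0.02, T)  # normal/permanent supply, price-insensitive
    p = trend + surprise                  # log price level
    Ep = trend                            # rational expectation of the price level
    y = permanent + beta * (p - Ep)       # total (log) supply
    return p, y

# With unanticipated shocks, output and prices are positively correlated:
# an econometrician would estimate a Phillips-type relation.
p, y = simulate(anticipated_inflation=0.0)
phillips_corr = np.corrcoef(np.diff(p), np.diff(y))[0, 1]

# But raising the *anticipated* inflation rate leaves average output unchanged.
_, y_high = simulate(anticipated_inflation=0.01)

print(f"estimated price-output correlation: {phillips_corr:.2f}")
print(f"mean output, zero vs 1% anticipated inflation: "
      f"{y.mean():+.4f} vs {y_high.mean():+.4f}")
```

Of course, the simulation inherits all of the narrow premises criticized above, which is precisely the point: the result follows from what the example assumes away, not from any demonstrated robustness.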

Perhaps a better model than the one Lucas posited would have been one in which the amount supplied in each market was positively correlated with the amount supplied in every other market, inasmuch as an increase (decrease) in the amount supplied in one market will tend to increase (decrease) demand in other markets. In that case, I conjecture, deviations from permanent supply would tend to be cumulative (though not necessarily permanent), implying a more complex propagation mechanism than Lucas’s simple model does. Nor is it obvious to me how the equilibrium of such a model would compare to the equilibrium in the Lucas model. It does not seem inconceivable that a model could be constructed in which equilibrium output depended on the average price level. But this is just conjecture on my part, because I haven’t tried to write out and solve such a model. Perhaps an interested reader out there will try to work it out and report back to us on the results.
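For what it’s worth, the conjecture can be crudely sketched in a simulation (my construction, not a worked-out model): let each market’s transitory deviation feed back, with coefficient ρ, on the average deviation across markets in the previous period, as if supply shortfalls in some markets depressed demand in the rest. A one-time disturbance then decays only geometrically rather than vanishing at once, that is, cumulative persistence without permanence:

```python
import numpy as np

# Crude sketch of the conjectured spillover model (my construction, not a
# solved model; parameter values are hypothetical). Each market's deviation
# from permanent supply responds to the economy-wide average deviation last
# period (demand spillovers), plus a small idiosyncratic shock.
rng = np.random.default_rng(1)
N, T = 50, 200
rho = 0.8                          # spillover strength (hypothetical)

dev = np.zeros((T, N))             # deviations from permanent supply, by market
dev[0] = np.ones(N)                # a common one-time disturbance of 1 (in logs)
for t in range(1, T):
    dev[t] = rho * dev[t - 1].mean() + rng.normal(0.0, 0.01, N)

avg = dev.mean(axis=1)
# The initial disturbance dies out only gradually (geometric decay at rate rho),
# rather than disappearing after one period as in the uncorrelated case.
print([f"{avg[t]:+.3f}" for t in (0, 1, 5, 10, 50)])
```

Whether such spillovers would also shift the equilibrium itself, as speculated above, is exactly what this sketch cannot answer; it only illustrates the cumulative (but not permanent) propagation the paragraph conjectures.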

PS:  Congratulations to Scott Sumner on his excellent op-ed on nominal GDP level targeting in today’s Financial Times.

The State We’re In

Last week, Paul Krugman, set off by this blog post, complained about the current state of macroeconomics. Apparently, Krugman feels that if saltwater economists like himself were willing to accommodate the intertemporal-maximization paradigm developed by the freshwater economists, the freshwater economists ought to have reciprocated by acknowledging some role for countercyclical policy. Seeing little evidence of accommodation on the part of the freshwater economists, Krugman, evidently feeling betrayed, came to this rather harsh conclusion:

The state of macro is, in fact, rotten, and will remain so until the cult that has taken over half the field is somehow dislodged.

Besides engaging in a pretty personal attack on his fellow economists, Krugman did not present a very flattering picture of economics as a scientific discipline. What Krugman describes seems less like a search for truth than a cynical bargaining game, in which Krugman feels that his (saltwater) side, after making good faith offers of cooperation and accommodation that were seemingly accepted by the other (freshwater) side, was somehow misled into making concessions that undermined his side’s strategic position. What I found interesting was that Krugman seemed unaware that his account of the interaction between saltwater and freshwater economists was not much more flattering to the former than the latter.

Krugman’s diatribe gave Stephen Williamson an opportunity to scorn and scold Krugman for a crass misunderstanding of the progress of science. According to Williamson, modern macroeconomics has passed by out-of-touch old-timers like Krugman. Among modern macroeconomists, Williamson observes, the freshwater-saltwater distinction is no longer meaningful or relevant. Everyone is now, more or less, on the same page; differences are worked out collegially in seminars, workshops, conferences and in the top academic journals without the rancor and disrespect in which Krugman indulges himself. If you are lucky (and hard-working) enough to be part of it, macroeconomics is a great place to be. One can almost visualize the condescension and the pity oozing from Williamson’s pores for those not part of the charmed circle.

Commenting on this exchange, Noah Smith generally agreed with Williamson that modern macroeconomics is not a discipline divided against itself; the intertemporal maximizers are clearly dominant. But Noah allows himself to wonder whether this is really any cause for celebration – celebration, at any rate, by those not in the charmed circle.

So macro has not yet discovered what causes recessions, nor come anywhere close to reaching a consensus on how (or even if) we should fight them. . . .

Given this state of affairs, can we conclude that the state of macro is good? Is a field successful as long as its members aren’t divided into warring camps? Or should we require a science to give us actual answers? And if we conclude that a science isn’t giving us actual answers, what do we, the people outside the field, do? Do we demand that the people currently working in the field start producing results pronto, threatening to replace them with people who are currently relegated to the fringe? Do we keep supporting the field with money and acclaim, in the hope that we’re currently only in an interim stage, and that real answers will emerge soon enough? Do we simply conclude that the field isn’t as fruitful an area of inquiry as we thought, and quietly defund it?

All of this seems to me to be a side issue. Who cares if macroeconomists like each other or hate each other? Whether they get along or not, whether they treat each other nicely or not, is really of no great import. For example, it was largely at Milton Friedman’s urging that Harry Johnson was hired to be the resident Keynesian at Chicago. But almost as soon as Johnson arrived, he and Friedman were getting into rather unpleasant personal exchanges and arguments. And even though Johnson underwent a metamorphosis from mildly left-wing Keynesianism to moderately conservative monetarism during his nearly two decades at Chicago, his personal and professional relationship with Friedman got progressively worse. And all of that nastiness was happening while both Friedman and Johnson were becoming dominant figures in the economics profession. So what does the level of collegiality and absence of personal discord have to do with the state of a scientific or academic discipline? Not all that much, I would venture to say.

So when Scott Sumner says:

while Krugman might seem pessimistic about the state of macro, he’s a Pollyanna compared to me. I see the field of macro as being completely adrift

I agree totally. But I diagnose the problem with macro a bit differently from how Scott does. He is chiefly concerned with getting policy right, which is certainly important, inasmuch as policy, since early 2008, has, for the most part, been disastrously wrong. One did not need a theoretically sophisticated model to see that the FOMC, out of misplaced concern that inflation expectations were becoming unanchored, kept money way too tight in 2008 in the face of rising food and energy prices, even as the economy was rapidly contracting in the second and third quarters. And in the wake of the contraction in the second and third quarters and a frightening collapse and panic in the fourth quarter, it did not take a sophisticated model to understand that rapid monetary expansion was called for. That’s why Scott writes the following:

All we really know is what Milton Friedman knew, with his partial equilibrium approach. Monetary policy drives nominal variables.  And cyclical fluctuations caused by nominal shocks seem sub-optimal.  Beyond that it’s all conjecture.

Ahem, and Marshall and Wicksell and Cassel and Fisher and Keynes and Hawtrey and Robertson and Hayek and at least 25 others that I could easily name. But it’s interesting to note that, despite his Marshallian (anti-Walrasian) proclivities, it was Friedman himself who started modern macroeconomics down the fruitless path it has been following for the last 40 years when he introduced the concept of the natural rate of unemployment in his famous 1968 AEA Presidential lecture on the role of monetary policy. Friedman defined the natural rate of unemployment as:

the level [of unemployment] that would be ground out by the Walrasian system of general equilibrium equations, provided there is embedded in them the actual structural characteristics of the labor and commodity markets, including market imperfections, stochastic variability in demands and supplies, the costs of gathering information about job vacancies, and labor availabilities, the costs of mobility, and so on.

Aside from the peculiar verb choice in describing the solution of an unknown variable contained in a system of equations, what is noteworthy about his definition is that Friedman was explicitly adopting a conception of an intertemporal general equilibrium as the unique and stable solution of that system of equations, and, whether he intended to or not, appeared to be suggesting that such a concept was operationally useful as a policy benchmark. Thus, despite Friedman’s own deep skepticism about the usefulness and relevance of general-equilibrium analysis, Friedman, for whatever reasons, chose to present his natural-rate argument in the language (however stilted on his part) of the Walrasian general-equilibrium theory for which he had little use and even less sympathy.

Inspired by the powerful policy conclusions that followed from the natural-rate hypothesis, Friedman’s direct and indirect followers, most notably Robert Lucas, used that analysis to transform macroeconomics, reducing macroeconomics to the manipulation of a simplified intertemporal general-equilibrium system. Under the assumption that all economic agents could correctly forecast all future prices (aka rational expectations), all agents could be viewed as intertemporal optimizers, any observed unemployment reflecting the optimizing choices of individuals to consume leisure or to engage in non-market production. I find it inconceivable that Friedman could have been pleased with the direction taken by the economics profession at large, and especially by his own department when he departed Chicago in 1977. This is pure conjecture on my part, but Friedman’s departure upon reaching retirement age might have had something to do with his own lack of sympathy with the direction that his own department had, under Lucas’s leadership, already taken. The problem was not so much with policy, but with the whole conception of what constitutes macroeconomic analysis.

The paper by Carlaw and Lipsey, which I referenced in my previous post, provides just one of many possible lines of attack against what modern macroeconomics has become. Without in any way suggesting that their criticisms are not weighty and serious, I would just point out that there really is no basis at all for assuming that the economy can be appropriately modeled as being in a continuous, or nearly continuous, state of general equilibrium. In the absence of a complete set of markets, the Arrow-Debreu conditions for the existence of a full intertemporal equilibrium are not satisfied, and there is no market mechanism that leads, even in principle, to a general equilibrium. The rational-expectations assumption is simply a deus-ex-machina method by which to solve a simplified model, a method with no real-world counterpart. And the suggestion that rational expectations is no more than the extension, let alone a logical consequence, of the standard rationality assumptions of basic economic theory is transparently bogus. Nor is there any basis for assuming that, if a general equilibrium does exist, it is unique, and that if it is unique, it is necessarily stable. In particular, in an economy with an incomplete (in the Arrow-Debreu sense) set of markets, an equilibrium may very much depend on the expectations of agents, expectations potentially even being self-fulfilling. We actually know that in many markets, especially those characterized by network effects, equilibria are expectation-dependent. Self-fulfilling expectations may thus be a characteristic property of modern economies, but they do not necessarily produce equilibrium.

An especially pretentious conceit of the modern macroeconomics of the last 40 years is that the extreme assumptions on which it rests are the essential microfoundations without which macroeconomics lacks any scientific standing. That’s preposterous. Perfect foresight and rational expectations are assumptions required for finding the solution to a system of equations describing a general equilibrium. They are not essential properties of a system consistent with the basic rationality propositions of microeconomics. To insist that a macroeconomic theory must correspond to the extreme assumptions necessary to prove the existence of a unique stable general equilibrium is to guarantee in advance the sterility and uselessness of that theory, because the entire field of study called macroeconomics is the result of long historical experience strongly suggesting that persistent, even cumulative, deviations from general equilibrium have been routine features of economic life since at least the early 19th century. That modern macroeconomics can tell a story in which apparently large deviations from general equilibrium are not really what they seem is not evidence that such deviations don’t exist; it merely shows that modern macroeconomics has constructed a language that allows the observed data to be classified in terms consistent with a theoretical paradigm that does not allow for lapses from equilibrium. That modern macroeconomics has constructed such a language is no reason why anyone not already committed to its underlying assumptions should feel compelled to accept its validity.

In fact, the standard comparative-statics propositions of microeconomics are also based on the assumption of the existence of a unique stable general equilibrium. Those comparative-statics propositions about the signs of the derivatives of various endogenous variables (price, quantity demanded, quantity supplied, etc.) with respect to various parameters of a microeconomic model involve comparisons between equilibrium values of the relevant variables before and after the posited parametric changes. All such comparative-statics results involve a ceteris-paribus assumption, conditional on the existence of a unique stable general equilibrium which serves as the starting and ending point (after adjustment to the parameter change) of the exercise, thereby isolating the purely hypothetical effect of a parameter change. Thus, as much as macroeconomics may require microfoundations, microeconomics is no less in need of macrofoundations, i.e., the existence of a unique stable general equilibrium, absent which a comparative-statics exercise would be meaningless, because the ceteris-paribus assumption could not otherwise be maintained. To assert that macroeconomics is impossible without microfoundations is therefore to reason in a circle, the empirically relevant propositions of microeconomics being predicated on the existence of a unique stable general equilibrium. But it is precisely the putative failure of a unique stable intertemporal general equilibrium to be attained, or to serve as a powerful attractor to economic variables, that provides the rationale for the existence of a field called macroeconomics.
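The dependence of comparative statics on a stable equilibrium can be seen in the simplest textbook case (a hypothetical one-market sketch, not drawn from the post, with demand shifted by a parameter α):

```latex
% One market: demand D(p, \alpha), supply S(p), with the parameter \alpha
% shifting demand. Equilibrium price p^*(\alpha) is defined implicitly by
%   D(p^*(\alpha), \alpha) = S(p^*(\alpha)).
% Differentiating totally with respect to \alpha:
%   D_p \, \frac{dp^*}{d\alpha} + D_\alpha = S_p \, \frac{dp^*}{d\alpha},
% and solving for the comparative-statics derivative:
\frac{dp^*}{d\alpha} \;=\; \frac{D_\alpha}{\,S_p - D_p\,}.
% The sign of dp^*/d\alpha is determinate only if S_p - D_p > 0, which is
% precisely the (Walrasian) stability condition for this market --
% Samuelson's "correspondence principle." Absent a stable equilibrium for
% the market to return to, the derivative has no behavioral interpretation.
```

Even here, in the most elementary partial-equilibrium exercise, the comparative-statics result is conditional on stability; the point in the text is that the same conditionality, writ large, applies to the general-equilibrium system as a whole.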

So I certainly agree with Krugman that the present state of macroeconomics is pretty dismal. However, his admitted willingness (and that of his New Keynesian colleagues) to adopt a theoretical paradigm that assumes the perpetual, or near-perpetual, existence of a unique stable intertemporal equilibrium, or at most admits the possibility of a very small set of deviations from such an equilibrium, means that Krugman and his saltwater colleagues also bear a share of the responsibility for the very state of macroeconomics that Krugman now deplores.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
