Archive for January, 2013

John Taylor, Post-Modern Monetary Theorist

In the beginning, there was Keynesian economics; then came Post-Keynesian economics. After Post-Keynesian economics came Modern Monetary Theory. And now, it seems, John Taylor has discovered Post-Modern Monetary Theory.

What, you may be asking yourself, is Post-Modern Monetary Theory all about? Great question!  In a recent post, Scott Sumner tried to deconstruct Taylor’s position, and found himself unable to determine just what it is that Taylor wants in the way of monetary policy.  How post-modern can you get?

Taylor is annoyed that the Fed is keeping interest rates too low by a policy of forward guidance, i.e., promising to keep short-term interest rates close to zero for an extended period while buying Treasuries to support that policy.

And yet—unlike its actions taken during the panic—the Fed’s policies have been accompanied by disappointing outcomes. While the Fed points to external causes, it ignores the possibility that its own policy has been a factor.

At this point, the alert reader is surely anticipating an explanation of why forward guidance aimed at reducing the entire term structure of interest rates, thereby increasing aggregate demand, has failed to do so, notwithstanding the teachings of both Keynesian and non-Keynesian monetary theory.  Here is Taylor’s answer:

At the very least, the policy creates a great deal of uncertainty. People recognize that the Fed will eventually have to reverse course. When the economy begins to heat up, the Fed will have to sell the assets it has been purchasing to prevent inflation.

Taylor seems to be suggesting that, despite low interest rates, the public is not willing to spend because of increased uncertainty. But why wasn’t the public spending more in the first place, before all that nasty forward guidance? Could it possibly have had something to do with business pessimism about demand and household pessimism about employment? If the problem stems from an underlying state of pessimistic expectations about the future, the question is whether Taylor considers such pessimism to be an element of, or related to, uncertainty.

I don’t know the answer, but Taylor posits that the public is assuming that the Fed’s policy will have to be reversed at some point. Why? Because the economy will “heat up.” As an economic term, the verb “to heat up” is pretty vague, but it seems to connote, at the very least, increased spending and employment. Which raises a further question: given a state of pessimistic expectations about future demand and employment, does a policy that, by assumption, increases the likelihood of additional spending and employment create uncertainty or diminish it?

It turns out that Taylor has other arguments for the ineffectiveness of forward guidance.  We can safely ignore his two throw-away arguments about on-again off-again asset purchases, and the tendency of other central banks to follow Fed policy.  A more interesting reason is provided when Taylor compares Fed policy to a regulatory price ceiling.

[I]f investors are told by the Fed that the short-term rate is going to be close to zero in the future, then they will bid down the yield on the long-term bond. The forward guidance keeps the long-term rate low and tends to prevent it from rising. Effectively the Fed is imposing an interest-rate ceiling on the longer-term market by saying it will keep the short rate unusually low.

The perverse effect comes when this ceiling is below what would be the equilibrium between borrowers and lenders who normally participate in that market. While borrowers might like a near-zero rate, there is little incentive for lenders to extend credit at that rate.

This is much like the effect of a price ceiling in a rental market where landlords reduce the supply of rental housing. Here lenders supply less credit at the lower rate. The decline in credit availability reduces aggregate demand, which tends to increase unemployment, a classic unintended consequence of the policy.

When economists talk about a price ceiling, what they usually mean is that there is some legal prohibition on transactions between willing parties at a price above a specified legal maximum. If the prohibition is enforced, as are, for example, rent ceilings in New York City, some people trying to rent apartments will be unable to do so, even though they are willing to pay as much as, or more than, others are paying for comparable apartments. The only rates that the Fed is targeting, directly or indirectly, are those on US Treasuries at various maturities. All other interest rates in the economy are what they are because, given the overall state of expectations, transactors are voluntarily agreeing to the terms reflected in those rates. For any given class of financial instruments, everyone willing to purchase or sell those instruments at the going rate is able to do so. For Professor Taylor to analogize this state of affairs to a price ceiling is not only novel, it is thoroughly post-modern.

Nunes and Cole Write the E-Book on Market Monetarism

This post is slightly late in coming, but I want to give my fellow bloggers and valued commenters on this blog, Marcus Nunes and Benjamin Cole, a shout-out and my warmest congratulations on the publication last week of their new e-book Market Monetarism: Roadmap to Economic Prosperity.

I have not yet read the entire book, but I did read the introductory chapter available on Amazon, and I was impressed, but not surprised, by their wide knowledge and understanding of monetary economics as well as their clear, direct and engaging style. I was also pleased to find that they gave due recognition to Gustav Cassel, Ralph Hawtrey, and James Meade for their important contributions. Nor do I hold it against them that they quoted from my paper on Hawtrey and Cassel, though they did forget to mention the name of my co-author, Ron Batchelder.

Way to go, guys.

Charles Goodhart on Nominal GDP Targeting

Charles Goodhart might just be the best all-around monetary economist in the world, having made impressive contributions to monetary theory and the history of monetary theory, to monetary history and the history of monetary institutions (especially central banking), and to the theory and, in his capacity as chief economist of the Bank of England, the practice of monetary policy. So whenever Goodhart offers his views on monetary policy, it is a good idea to pay close attention to what he says. But if there is anything to be learned from the history of economics (and I daresay the history of any scientific discipline), it is that nobody gets it right all the time. It’s nice to have a reputation, but sadly reputation provides no protection from error.

In response to the recent buzz about targeting nominal GDP, Goodhart, emeritus professor at the London School of Economics and an adviser to Morgan Stanley, along with two Morgan Stanley economists, Jonathan Ashworth and Melanie Baker, just published a critique of a recent speech by Mark Carney, Governor-elect of the Bank of England, in which Carney seemed to endorse targeting the level of nominal GDP (hereinafter NGDPLT). (See also Marcus Nunes’s excellent post about Goodhart et al.) Goodhart et al. have two basic complaints about NGDPLT. The first is that the choice of an initial target level (i.e., whether current NGDP is now at its target or away from it, and if so by how much) and of the prescribed growth in the target level over time would itself create destabilizing uncertainty in the transition to an NGDPLT monetary regime. The key distinction between a level target and a growth-rate target is that the former requires a subsequent compensatory adjustment for any deviation from the target, while the latter requires no such adjustment. Because deviations will occur under any targeting regime, Goodhart et al. worry that the compensatory adjustments required by NGDPLT could trigger destabilizing gyrations in NGDP growth, especially if expectations, as they think likely, became unanchored.

This concern seems easily enough handled if the monetary authority is given, say, a 1-1.5% band around its actual target within which to operate. Inevitable variations around the target would not automatically require an immediate, rapid compensatory adjustment. As long as the monetary authority remained tolerably close to its target, it would not be compelled to make a sharp policy adjustment. A good driver does not always drive down the middle of his side of the road; he uses all the space available to avoid having to make abrupt changes in the direction in which the car is headed. The same principle would govern the decisions of a skillful monetary authority.
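The level-versus-growth-rate distinction, and the cushioning effect of a band, can be put in numbers. The 4% trend path, the 2-point shortfall, and the 1% band below are all hypothetical figures for illustration, not a policy proposal:

```python
# Toy illustration: level target vs. growth-rate target after a shortfall.
# Hypothetical numbers: 4% trend NGDP growth, starting NGDP = 100.
trend_growth = 0.04
actual_year1 = 102.0                                  # vs. a year-1 target of 104

# Growth-rate target: bygones are bygones -- just grow 4% from wherever you are.
growth_target_required = trend_growth

# Level target: must return to the original path, so required growth is higher.
target_level_year2 = 100 * (1 + trend_growth) ** 2    # 108.16
level_target_required = target_level_year2 / actual_year1 - 1

# A 1% tolerance band around the path softens the required catch-up:
band = 0.01
banded_required = target_level_year2 * (1 - band) / actual_year1 - 1

print(f"growth-rate target: {growth_target_required:.1%}")   # 4.0%
print(f"level target:       {level_target_required:.1%}")    # 6.0%
print(f"with 1% band:       {banded_required:.1%}")          # 5.0%
```

The band does not eliminate the compensatory adjustment, but it shrinks it, which is the point of the driving analogy.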

Another concern of Goodhart et al. is that the choice of the target growth rate of NGDP depends on how much real growth we think the economy is capable of. If real growth of 3% a year is possible, then the corresponding NGDP level target depends on how much inflation policy makers believe necessary to achieve that real GDP growth rate. If the “correct” rate of inflation is about 2%, then the targeted level of NGDP should grow at 5% a year. But Goodhart et al. are worried that achievable growth may be declining. If so, NGDPLT at 5% a year will imply more than 2% annual inflation.

Effectively, any overestimation of the sustainable real rate of growth, and such overestimation is all too likely, could force an MPC [monetary policy committee], subject to a level nominal GDP target, to soon have to aim for a significantly higher rate of inflation. Is that really what is now wanted? Bring back the stagflation of the 1970s; all is forgiven?

With all due respect, I find this concern greatly overblown. Even if the expectation of 3% real growth is wildly optimistic, say 2% too high, a 5% NGDP growth path would imply only 4% inflation. That might be too high a rate for Goodhart’s taste, or mine for that matter, but it would be a far cry from the 1970s, when inflation was often in the double-digits. Paul Volcker achieved legendary status in the annals of central banking by bringing the US rate of inflation down to 3.5 to 4%, so one needs to maintain some sense of proportion in these discussions.
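The arithmetic here is just that NGDP growth is approximately real growth plus inflation, so any overestimate of sustainable real growth maps one-for-one into extra inflation. A quick sketch, with the 5% path and the alternative growth rates taken from the discussion above:

```python
# Implied inflation along a 5% NGDP growth path: inflation ~ NGDP growth - real growth.
# (The additive approximation ignores the small cross-term in (1+g)*(1+pi).)
ngdp_growth = 0.05
implied_inflation = {g: ngdp_growth - g for g in (0.03, 0.02, 0.01)}

for g, pi in implied_inflation.items():
    print(f"real growth {g:.0%} -> implied inflation {pi:.0%}")
# Even if the assumed 3% real growth is two points too optimistic (1% actual),
# implied inflation is 4% -- high, but nowhere near 1970s double digits.
```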

Finally, Goodhart et al. invoke the Phillips Curve.

[A]n NGDP target would appear to run counter to the previously accepted tenets of monetary theory. Perhaps the main claim of monetary economics, as persistently argued by Friedman, and the main reason for having an independent Central Bank, is that over the medium and longer term monetary forces influence only monetary variables. Other real (e.g. supply-side) factors determine growth; the long-run Phillips curve is vertical. Do those advocating a nominal GDP target now deny that? Do they really believe that faster inflation now will generate a faster, sustainable, medium- and longer-term growth rate?

While it is certainly undeniable that Friedman showed, as, in truth, many others had before him, that, for an economy in approximate full-employment equilibrium, increased inflation cannot permanently reduce unemployment, it is far from obvious (to indulge in a bit of British understatement) that we are now in a state of full-employment equilibrium. If the economy is not now in full-employment equilibrium, then monetary-neutrality propositions, which hold that money influences only monetary, but not real, variables in the medium and longer term, are of no relevance to policy. Those advocating a nominal GDP target need not deny that the long-run Phillips Curve is vertical, though, as I have argued previously (here, here, and here), the proposition that the long-run Phillips Curve is vertical is very far from being the natural law that Goodhart and many others seem to regard it as. And if Goodhart et al. believe that we are in fact in a state of full-employment equilibrium, then they ought to say so forthrightly, and they ought to make an argument to justify that far-from-obvious characterization of the current state of affairs.

Having said all that, I do have some sympathy with the following point made by Goodhart et al.

Given our uncertainty about sustainable growth, an NGDP target also has the obvious disadvantage that future certainty about inflation becomes much less than under an inflation (or price level) target. In order to estimate medium- and longer-term inflation rates, one has first to take some view about the likely sustainable trends in future real output. The latter is very difficult to do at the best of times, and the present is not the best of times. So shifting from an inflation to a nominal GDP growth target is likely to have the effect of raising uncertainty about future inflation and weakening the anchoring effect on expectations of the inflation target.

That is one reason why in my book Free Banking and Monetary Reform, I advocated Earl Thompson’s proposal for a labor standard aimed at stabilizing average wages (or, more precisely, average expected wages). But if you stabilize wages, and productivity is falling, then prices must rise. That’s just a matter of arithmetic. But there is no reason why the macroeconomically optimal rate of inflation should be invariant with respect to the rate of technological progress.
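The arithmetic in question is the unit-labor-cost identity: with average money wages stabilized, the price level varies inversely with labor productivity. A minimal sketch, with purely illustrative numbers:

```python
# Unit-labor-cost arithmetic: with average money wages stabilized, the price
# level varies inversely with labor productivity. Illustrative numbers only.
wage = 30.0               # stabilized average wage, dollars per hour
productivity = 20.0       # output per hour of work

price_before = wage / productivity            # unit labor cost = 1.50
price_after = wage / (productivity * 0.98)    # productivity falls 2% ...
implied_inflation = price_after / price_before - 1   # ... so prices rise ~2.04%
```

Under a wage standard, in other words, inflation automatically tracks (the inverse of) productivity growth, which is exactly the property being claimed for it.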

HT:  Bill Woolsey

The Social Cost of Finance

Noah Smith has a great post that bears on the topic that I have been discussing of late (here and here): whether the growth of the US financial sector over the past three decades had anything to do with the decline in the real rate of interest that seems to have occurred over the same period. I have been suggesting that there may be reason to believe that the growth in the financial sector (from about 5% of GDP in 1980 to 8% in 2007) has reduced the productivity of the rest of the economy, because a not insubstantial part of the earnings of the financial sector has been extracted from relatively unsophisticated, informationally disadvantaged traders and customers. Much of what financial firms do is aimed at obtaining an information advantage from which profit can be extracted, just as athletes devote resources to gaining a competitive advantage. The resources devoted to gaining informational advantage are mostly wasted, being used to transfer, not create, wealth. This seems to be true as a matter of theory; what is less clear is whether enough resources have been wasted to cause a non-negligible deterioration in economic performance.

Noah underscores the paucity of our knowledge by referring to two papers, one by Robin Greenwood and David Scharfstein (recently published in the Journal of Economic Perspectives) and the other a response by John Cochrane posted on his blog (see here for the PDF). The Greenwood and Scharfstein paper provides theoretical arguments and evidence that tend to support the proposition that the US financial sector is too large. Here is how they sum up their findings.

First, a large part of the growth of finance is in asset management, which has brought many benefits including, most notably, increased diversification and household participation in the stock market. This has likely lowered required rates of return on risky securities, increased valuations, and lowered the cost of capital to corporations. The biggest beneficiaries were likely young firms, which stand to gain the most when discount rates fall. On the other hand, the enormous growth of asset management after 1997 was driven by high fee alternative investments, with little direct evidence of much social benefit, and potentially large distortions in the allocation of talent. On net, society is likely better off because of active asset management but, on the margin, society would be better off if the cost of asset management could be reduced.

Second, changes in the process of credit delivery facilitated the expansion of household credit, mainly in residential mortgage credit. This led to higher fee income to the financial sector. While there may be benefits of expanding access to mortgage credit and lowering its cost, we point out that the U.S. tax code already biases households to overinvest in residential real estate. Moreover, the shadow banking system that facilitated this expansion made the financial system more fragile.

In his response, Cochrane offers a number of reasons why Greenwood and Scharfstein are understating the benefits generated by active asset management. Here is a passage from Cochrane’s paper (quoted also by Noah) that I would like to focus on.

I conclude that information trading of this sort sits at the conflict of two externalities / public goods. On the one hand, as French points out, “price impact” means that traders are not able to appropriate the full value of the information they bring, so there can be too few resources devoted to information production (and digestion, which strikes me as far more important). On the other hand, as Greenwood and Scharfstein point out, information is a non-rival good, and its exploitation in financial markets is a tournament (first to use it gets all the benefit) so the theorem that profits you make equal the social benefit of its production is false. It is indeed a waste of resources to bring information to the market a few minutes early, when that information will be revealed for free a few minutes later. Whether we have “too much” trading, too many resources devoted to finding information that somebody already has in will be revealed in a few minutes, or “too little” trading, markets where prices go for long times not reflecting important information, as many argued during the financial crisis, seems like a topic which neither theory nor empirical work has answered with any sort of clarity.

Cochrane’s characterization of information trading as a public good is not wrong, inasmuch as we all benefit from the existence of markets for goods and assets, even those of us who don’t participate routinely (or ever) in those markets: first, because the existence of those markets provides us with opportunities to trade that may, at some unknown future time, become very valuable to us; and second, because the existence of markets contributes to the efficient utilization of resources, thereby increasing the total value of output. Because the existence of markets is a kind of public good, it may be true that even more market trading than now occurs would be socially beneficial. Suppose that every trade involves a transaction cost of 5 cents, and that this cost prevents at least one trade from taking place, because the expected gain to the traders from that trade would be only 4 cents. But since that unconsummated trade would also confer a benefit on third parties, by improving the allocation of resources ever so slightly, causing total output to rise by, say, 3 cents, it would be worth it to the rest of us to subsidize the parties to that unconsummated trade by rebating some part of the transaction cost associated with that trade.
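The 5-, 4-, and 3-cent example can be laid out explicitly:

```python
# The unconsummated trade: the private gain falls short of the transaction cost,
# so the trade doesn't happen, even though the total gain (private + third-party)
# exceeds the cost. Figures are the post's hypothetical cents, in dollars.
transaction_cost = 0.05
private_gain = 0.04       # expected gain to the two traders
external_benefit = 0.03   # third-party gain from a slightly better allocation

privately_worthwhile = private_gain > transaction_cost                      # False
socially_worthwhile = private_gain + external_benefit > transaction_cost    # True

# The smallest rebate that makes the traders willing to transact is 1 cent,
# and the rest of us come out ahead as long as it is below the 3-cent benefit.
min_rebate = transaction_cost - private_gain
subsidy_pays = min_rebate < external_benefit
```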

But here’s my problem with Cochrane’s argument. Let us imagine that there is some unique social optimum, or at least a defined set of Pareto-optimal allocations, which we are trying to attain, or to come as close as possible to. The existence of functioning markets certainly helps us come closer to the set of Pareto-optimal allocations than if markets did not exist. Cochrane is suggesting that, by devoting more resources to the production of information (which, in a basically free-market, private-property economy, involves the creation of private informational advantages), we get more trading, and with more trading we come closer to the set of Pareto-optimal allocations than with less trading. However, it seems plausible that the production of additional information and the increase in trading activity are subject to diminishing returns, in the sense that eventually obtaining additional information and engaging in additional trades reduce the distance between the actual allocation and the set of Pareto-optimal allocations by successively smaller amounts. Otherwise, we would in fact reach Pareto optimality. So, as we devote more and more resources to producing information and to trading, the amount of public good co-generated must diminish. But this means that the negative externality associated with using increasing amounts of resources to produce private informational advantages must at some point (and probably fairly quickly) overwhelm the public good co-generated by increased trading.
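The diminishing-returns argument can be illustrated with a stylized calculation. The constant marginal cost and the assumption that each unit of information-seeking closes half the remaining gap are purely hypothetical, chosen only to show the crossover:

```python
# Stylized illustration: each extra unit of information-seeking closes half the
# remaining gap to the Pareto frontier (hypothetical), while its resource cost
# is constant. Marginal public benefit must eventually fall below marginal cost.
marginal_cost = 1.0
gap = 100.0          # initial "distance" from the set of Pareto-optimal allocations
closure_rate = 0.5

net_marginal_benefits = []
for _ in range(8):
    marginal_benefit = gap * closure_rate   # public good co-generated by this unit
    net_marginal_benefits.append(marginal_benefit - marginal_cost)
    gap -= marginal_benefit

# Early units are worth their cost; later ones are a net social loss.
first_wasteful_unit = next(i for i, net in enumerate(net_marginal_benefits) if net < 0)
```

Whatever the particular numbers, any geometrically shrinking benefit against a constant resource cost produces the same crossover from net social gain to net social loss.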

So although Cochrane has a theoretical point that, without more evidence than we now have, we can’t necessarily be sure that the increase in resources devoted to finance has been associated with a net social loss, I am still inclined to doubt strongly that, at the margin, there are net positive social benefits from adding resources to finance. In this regard, the paper (cited by Greenwood and Scharfstein) “The Allocation of Talent: Implications for Growth” by Kevin Murphy, Andrei Shleifer and Robert Vishny is especially worth reading.

Falling Real Interest Rates, Winner-Take-All Markets, and Lance Armstrong

In my previous post, I suggested that real interest rates are largely determined by expectations: entrepreneurial expectations of profit and household expectations of future income. Increased entrepreneurial optimism implies that entrepreneurs are revising upwards the anticipated net cash flows from the current stock of capital assets, in other words, an increasing demand for capital assets. Because the stock of capital assets doesn’t change much in the short run, an increased demand for those assets tends, in the short run, to raise real interest rates as people switch from fixed-income assets (bonds) into the real assets associated with increased expected net cash flows. Increased optimism by households about their future income prospects implies that their demand for long-lived assets, real or financial, tends to decline as households devote an increased share of current income to present consumption and less to saving for future consumption, because an increase in expected future income reduces the amount of current saving needed to achieve a given level of future consumption. The more optimistic I am about my future income, the less I will save in the present. If I win the lottery, I will start spending even before I collect my winnings. The reduced household demand for long-lived assets with which to provide for future consumption reduces the value of such assets, implying, for given expectations of their future yields, an increased real interest rate.
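The household side of the mechanism can be made concrete in a minimal two-period Fisherian endowment economy. Log utility, the discount factor, and the income numbers are my illustrative assumptions, not anything in the post:

```python
# Minimal two-period Fisherian endowment economy. Log utility and the numbers
# are assumptions for illustration, not anything in the post. With endowments
# y1 today and y2 expected tomorrow, and discount factor beta, the Euler
# equation 1/y1 = beta*(1+r)/y2 pins down the real rate: 1 + r = y2/(beta*y1).
def equilibrium_real_rate(y1, y2, beta=0.96):
    return y2 / (beta * y1) - 1

baseline = equilibrium_real_rate(100, 102)   # modest expected income growth
pessimism = equilibrium_real_rate(100, 98)   # expected income decline
optimism = equilibrium_real_rate(100, 106)

# Greater pessimism about future income -> more desired saving -> a lower real rate.
assert pessimism < baseline < optimism
```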

This is the appropriate neoclassical (Fisherian) framework within which to think about the determination of real interest rates. The Fisherian theory may not be right, but I don’t think that we have another theory of comparable analytical power and elegance. Other theories are just ad hoc, and lack the aesthetic appeal of the Fisherian theory. Alas, the world is a messy place, and we have no guarantee that the elegant theory will always win out. Truth and beauty need not be the same. (Sigh!)

Commenting on my previous post, Joshua Wojnilower characterized my explanation as “a combination of a Keynesian-demand side story in the first paragraph and an Austrian/Lachmann subjective expectations view in the second section.” I agree that Keynes emphasized the importance of changes in the state of entrepreneurial expectations in causing shifts in the marginal efficiency of capital, and that Austrian theory is notable for its single-minded emphasis on the subjectivity of expectations. But these ideas are encompassed by the Fisherian neoclassical paradigm, with entrepreneurial expectations about profits determining the relevant slope of the production-possibility curve embodying opportunities for the current and future production of consumption goods, and household expectations about future income determining the slope of the household indifference curves reflecting their willingness to exchange current for future consumption. So it’s all in Fisher.

Thus, as I observed, falling real interest rates could be explained, under the Fisherian theory, by deteriorating entrepreneurial expectations, or by worsening household expectations about future income (employment). In my previous post, I suggested that, at least since the 2007-09 downturn, entrepreneurial profit expectations have been declining along with the income (employment) expectations of households. However, I am reluctant to suggest that this trend of expectational pessimism started before the 2007-09 downturn. One commenter, Diego Espinosa, offered some good reasons to think that since 2009 entrepreneurial expectations have been improving, so that falling real interest rates must be attributed to monetary policy. Although I find it implausible that entrepreneurial expectations have recovered (at least fully) since the 2007-09 downturn, I take Diego’s points seriously, and I am going to try to think through his arguments carefully, and perhaps respond further in a future post.

I also suggested in my previous post that there might be other reasons why real interest rates have been falling, which brings me to the point of this post. By way of disclaimer, I would say that what follows is purely speculative, and I raise it only because the idea seems interesting and worth thinking about, not because I am convinced that it is empirically significant in causing real interest rates to decline over the past two or three decades.

Almost ten months ago, I discussed the basic idea in a post in which I speculated about why there is no evidence of a strong correlation between reductions in marginal income tax rates and economic growth, notwithstanding the seemingly powerful theoretical argument for such a correlation. Relying on Jack Hirshleifer’s important distinction between the social and private value of information, I argued that insofar as reduced marginal tax rates contributed to an expansion of the financial sector of the economy, reduced marginal tax rates may have retarded, rather than spurred, growth.  The problem with the financial sector is that the resources employed in that sector, especially resources devoted to trading, are socially wasted, the profits accruing to trading reflecting not net additions to output, but losses incurred by other traders. In their quest for such gains, trading establishments incur huge expenses with a view to obtaining information advantages by which profits can be extracted as a result of trading with the informationally disadvantaged.

But financial trading is not the only socially wasteful activity that has attracted vast amounts of resources from other (socially productive) activities, i.e., making and delivering real goods and services valued by consumers. There is a whole set of markets that fall under the heading of winner-take-all markets, and some attribute increasing income inequality to their recent proliferation. What distinguishes these markets is that, as the name implies, rewards are heavily skewed toward the most successful participants: participants compete for a prize, and small differences in performance imply very large differences in reward. Because the payoff at the margin to an incremental improvement in performance is so large, the incentives to devote resources to improving performance are inefficiently exaggerated. Because of the gap between the large private return and the near-zero social return from improved performance, far too much effort and too many resources are wasted on achieving minor gains in performance. Lance Armstrong is but one of the unpleasant outcomes of a winner-take-all market.

It is also worth noting that competition in winner-take-all markets is far from benign. Sports leagues, which are classic examples of winner-take-all markets, operate on the premise that competition must be controlled, not just to prevent match-ups from being too lopsided, but to keep unrestricted competition from driving up costs to uneconomic levels. At one time, major league baseball had a reserve clause. The reserve clause exists no longer, but salary caps and other methods of controlling competition were needed to replace it. The main, albeit covert, function of the NCAA is to suppress competition for college athletes that would render college football and college basketball unprofitable if it were uncontrolled, with player salaries determined by supply and demand.

So if the share of economic activity taking place in winner-take-all markets has increased, the waste of resources associated with such markets has likely been increasing as well. Because of the distortion in the pricing of resources employed in winner-take-all markets, those resources typically receiving more than their net social product, employers in non-winner-take-all markets must pay an inefficient premium to employ those overpaid resources. These considerations suggest that the return on investment in non-winner-take-all markets may also be depressed because of such pricing distortions. But I am not sure that this static distortion has a straightforward implication about the trend of real interest rates over time.

A more straightforward connection between falling real interest rates and the increase in share of resources employed in winner-take-all markets might be that winner-take-all markets (e.g., most of the financial sector) are somehow diverting those most likely to innovate and generate new productive ideas into socially wasteful activities. That hypothesis certainly seems to accord with the oft-heard observation that, until recently at any rate, a disproportionate share of the best and brightest graduates of elite institutions of higher learning have been finding employment on Wall Street and in hedge funds. If so, the rate of technological advance in the productive sector of the economy would have been less rapid than the rate of advance in the unproductive sector of the economy. Somehow that doesn’t seem like a recipe for increasing the rate of economic growth and might even account for declining real interest rates. Something to think about as you watch the Lance Armstrong interview tomorrow night.

Why Are Real Interest Rates So Low, and Will They Ever Bounce Back?

In his recent post commenting on the op-ed piece in the Wall Street Journal by Michael Woodford and Frederic Mishkin on nominal GDP level targeting (hereinafter NGDPLT), Scott Sumner made the following observation.

I would add that Woodford’s preferred interest rate policy instrument is also obsolete.  In the next recession, and probably the one after that, interest rates will again fall to zero.  Indeed the only real suspense is whether they’ll be able to rise significantly above zero before the next recession hits.  In the US in 1937, Japan in 2001, and the eurozone in 2011, rates had barely nudged above zero before the next recession hit. Ryan Avent has an excellent post discussing this issue.

Perhaps I am misinterpreting him, but Scott seems to think that the decline in real interest rates reflects some fundamental change in the economy since approximately the start of the 21st century. Current real rates are extraordinarily low, below zero on US Treasuries well up the yield curve. The real rate is unobservable, but it is related to (though not identical with) the yields on TIPS, which are now negative out to 10-year maturities. The fall in real rates partly reflects the cyclical tendency for the expected rate of return on new investment to fall in recessions, but real interest rates were falling even before the downturn started in 2007.
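The connection between TIPS yields and the real rate runs through the Fisher relation: the nominal yield is approximately the real yield plus expected inflation, so the spread between a nominal Treasury yield and the matching TIPS yield gives the market’s implied (“breakeven”) inflation rate. The yields below are hypothetical round numbers, not actual quotes:

```python
# Fisher relation: nominal yield ~ real yield + expected inflation, so the
# spread between a nominal Treasury yield and the matching TIPS yield is the
# market's implied ("breakeven") inflation rate. Hypothetical round numbers,
# not actual quotes.
nominal_10y = 0.019   # 10-year Treasury yield
tips_10y = -0.006     # 10-year TIPS yield (negative, as noted in the post)

breakeven_inflation = nominal_10y - tips_10y   # ~2.5% implied expected inflation
real_rate_proxy = tips_10y                     # the implied real return, below zero
```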

In this post, at any rate, Scott doesn’t explain why the real rate of return on investment is falling. In the General Theory, Keynes speculated about the possibility that after the great industrialization of the 19th and early 20th centuries, new opportunities for investment were becoming exhausted. Alvin Hansen, an early American convert to Keynesianism, developed this idea into what he called the secular-stagnation hypothesis, a hypothesis suggesting that, after World War II, even with very low interest rates, the US economy was likely to relapse into depression. The postwar boom seemed to disprove Hansen’s idea, which became a kind of historical curiosity, if not an embarrassment. I wonder if Scott thinks that Keynes and Hansen were just about a half-century ahead of their time, or does he have some other reason in mind for why he thinks that real interest rates are destined to be very low?

One possibility, which, in a sense, is the optimistic take on our current predicament, is that low real interest rates are the result of bad monetary policy, the obstacle to an economic expansion that, in the usual course of events, would raise real interest rates back to more “normal” levels. There are two problems with this interpretation. First, the decline in real interest rates began in the last decade, well before the 2007-09 downturn. Second, why does Scott, evidently accepting Ryan Avent’s pessimistic assessment of the life expectancy of the current recovery, notwithstanding rapidly increasing support for NGDPLT, anticipate a relapse into recession before the recovery raises real interest rates above their current near-zero levels? Whatever the explanation, I look forward to hearing more from Scott about all this.

But in the meantime, here are some thoughts of my own about our low real interest rates.

First, it can’t be emphasized too strongly that low real interest rates are not caused by Fed “intervention” in the market. The Fed can buy up all the Treasuries it wants to, but doing so cannot force down interest rates if those low interest rates are inconsistent with expected rates of return on investment and the marginal rate of time preference of households. Despite low real interest rates, consumers are not rushing to borrow money at low rates to increase present consumption, nor are businesses rushing to take advantage of low real interest rates to undertake shiny new investment projects. Current low interest rates reflect the expectations of the public about their opportunities for trade-offs between current and future consumption and between current and future production, and their expectations about future price levels and interest rates. It is not the Fed that is punishing savers, as the editorial page of the Wall Street Journal constantly alleges. Rather, it is the distilled wisdom of market participants that is determining how much any individual should be rewarded for the act of abstaining from current consumption. Unfortunately, there is so little demand for resources to be used to increase future output that abstaining from current consumption contributes essentially nothing, at the margin, to the increase of future output, which is why the market is now offering next to no reward for a marginal abstention from current consumption.

Second, interest rates reflect the expectations of businesses and investors about the profitability of investing in new capital, and the expectations of households about their future incomes (largely dependent on expectations about future employment). These expectations – about profitability and about future incomes — are distinct, but they are clearly interdependent. If businesses are optimistic about the profitability of future investment, households are likely to be optimistic about future incomes. If households are pessimistic about future incomes, businesses are unlikely to expect investments in new capital to be profitable. If real interest rates are stuck at zero, it suggests that businesses and households are stuck in a mutually reinforcing cycle of pessimistic expectations — households about future income and employment and businesses about the profitability of investing in new capital. Expectations, as I have said before, are fundamental. Low interest rates and secular stagnation need not be the result of an inevitable drying up of investment opportunities; they may be the result of a vicious cycle of mutually reinforcing pessimism by households and businesses.

The simple Keynesian model — at least the Keynesian-cross version of intro textbooks or even the IS-LM version of intermediate textbooks – generally holds expectations constant. But in fact, it is through the adjustment of expectations that full-employment equilibrium is reached. For fiscal or monetary policy to work, it must alter expectations. Conventional calculations of spending or tax multipliers, which implicitly hold expectations constant, therefore miss the point: the object of policy is to alter expectations.
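To see what holding expectations constant means in the simplest case, here is a sketch of the textbook Keynesian-cross multiplier. The numbers, and the assumed fall in the marginal propensity to consume, are purely illustrative:

```python
# Textbook Keynesian cross: Y = C + I + G, with consumption C = a + b*Y.
# Equilibrium income is Y = (a + I + G) / (1 - b), so the spending
# multiplier is 1/(1 - b), but only if the marginal propensity to
# consume b (an expectational parameter) stays fixed.
def equilibrium_income(a, b, I, G):
    return (a + I + G) / (1 - b)

y0 = equilibrium_income(100, 0.8, 200, 100)   # baseline, b = 0.8
y1 = equilibrium_income(100, 0.8, 200, 150)   # G raised by 50
print(round((y1 - y0) / 50, 6))   # 5.0, the naive multiplier 1/(1 - 0.8)

# If the expansion also dims expectations, so that households cut back
# (b falls to 0.79, an arbitrary illustrative number), the realized
# multiplier is smaller than the expectations-constant calculation:
y1_pess = equilibrium_income(100, 0.79, 200, 150)
print(round((y1_pess - y0) / 50, 6))
```

The multiplier the econometrician estimates from past data is the first number; the multiplier the policy actually delivers depends on what the policy does to b.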

Similarly, as I have tried to suggest in my previous two posts, what Friedman called the natural rate of unemployment may itself depend on expectations. A change in monetary policy may alter expectations in a manner that reduces the natural rate. A straightforward application of the natural-rate model leads some to dismiss a reduction in unemployment associated with a small increase in the rate of inflation as inefficient, because the increase in employment results from workers being misled into accepting jobs that will turn out to pay them a lower real wage than they had expected. But even if that is so, the increase in employment may still be welfare-increasing, because the employment of each worker improves the chances that another worker will become employed. The social benefit of employment may be greater than the private benefit. In that case, the apparent anomaly (from the standpoint of the natural-rate hypothesis) that measurements of social well-being seem to be greatest when employment is maximized actually makes perfectly good sense.

In an upcoming post, I hope to explore some other possible explanations for low real interest rates.

The Lucas Critique Revisited

After writing my previous post, I reread Robert Lucas’s classic article “Econometric Policy Evaluation: A Critique,” surely one of the most influential economics articles of the last half century. While the main point of the article was not entirely original, as Lucas himself acknowledged, so powerful was his explanation of the point that it soon came to be known simply as the Lucas Critique. The Lucas Critique says that if a certain relationship between two economic variables has been estimated econometrically, policy makers, in formulating a policy for the future, cannot rely on that relationship to persist once a policy aiming to exploit it is adopted. The motivation for the Lucas Critique was the Friedman-Phelps argument that a policy of inflation would fail to reduce the unemployment rate in the long run, because workers would eventually adjust their expectations of inflation, thereby draining inflation of any stimulative effect. By restating the Friedman-Phelps argument as the application of a more general principle, Lucas reinforced and solidified the natural-rate hypothesis, thereby establishing a key principle of modern macroeconomics.

In my previous post I argued that microeconomic relationships, e.g., demand curves and marginal rates of substitution, are, as a matter of pure theory, not independent of the state of the macroeconomy. In an interdependent economy all variables are mutually determined, so there is no warrant for saying that microrelationships are logically prior to, or even independent of, macrorelationships. If so, then the idea of microfoundations for macroeconomics is misleading, because all economic relationships are mutually interdependent; some relationships are not more basic or more fundamental than others. The kernel of truth in the idea of microfoundations is that there are certain basic principles or axioms of behavior that we don’t think an economic model should contradict, e.g., arbitrage opportunities should not be left unexploited – people should not pass up obvious opportunities, such as mutually beneficial offers of exchange, to increase their wealth or otherwise improve their state of well-being.

So I was curious to see whether Lucas, while addressing the issue of how price expectations affected output and employment, recognized the possibility that a microeconomic relationship could be dependent on the state of the macroeconomy. For my purposes, the relevant passage occurs in section 5.3 (subtitled “Phillips Curves”) of the paper. After working out the basic theory earlier in the paper, Lucas, in section 5, provided three examples of how econometric estimates of macroeconomic relationships would mislead policy makers if the effect of expectations on those relationships were not taken into account. The first two subsections treated consumption expenditures and the investment tax credit. The passage that I want to focus on consists of the first two paragraphs of subsection 5.3 (which I now quote verbatim except for minor changes in Lucas’s notation).

A third example is suggested by the recent controversy over the Phelps-Friedman hypothesis that permanent changes in the inflation rate will not alter the average rate of unemployment. Most of the major econometric models have been used in simulation experiments to test this proposition; the results are uniformly negative. Since expectations are involved in an essential way in labor and product market supply behavior, one would presume, on the basis of the considerations raised in section 4, that these tests are beside the point. This presumption is correct, as the following example illustrates.

It will be helpful to utilize a simple, parametric model which captures the main features of the expectational view of aggregate supply – rational agents, cleared markets, incomplete information. We imagine suppliers of goods to be distributed over N distinct markets i, i = 1, . . ., N. To avoid index number problems, suppose that the same (except for location) good is traded in each market, and let y_it be the log of quantity supplied in market i in period t. Assume, further, that the supply y_it is composed of two factors

y_it = Py_it + Cy_it,

where Py_it denotes normal or permanent supply, and Cy_it cyclical or transitory supply (both again in logs). We take Py_it to be unresponsive to all but permanent relative price changes or, since the latter have been defined away by assuming a single good, simply unresponsive to price changes. Transitory supply Cy_it varies with perceived changes in the relative price of goods in i:

Cy_it = β(p_it – Ep_it),

where p_it is the log of the actual price in i at time t, and Ep_it is the log of the general (geometric average) price level in the economy as a whole, as perceived in market i.

Let’s take a moment to ponder the meaning of Lucas’s simplifying assumption that there is just one good. Relative prices (except for spatial differences in an otherwise identical good) are fixed by assumption; a disequilibrium (or suboptimal outcome) can arise only because of misperceptions of the aggregate price level. So, by explicit assumption, Lucas rules out the possibility that any microeconomic relationship depends on macroeconomic conditions. Note also that Lucas does not provide an account of the process by which market prices are established at each location, nothing being said about demand conditions. For example, if suppliers at location i perceive a price (transitorily) above the equilibrium price, and respond by (mistakenly) increasing output, thereby increasing their earnings, do those suppliers increase their demand to consume output? Suppose suppliers live and purchase at locations other than where they are supplying product, so that a supplier at location i purchases at location j, where i does not equal j. If a supplier at location i perceives an increase in price at location i, will his demand to purchase the good at location j increase as well? Will the increase in demand at location j cause an increase in the price at location j? What if there is a one-period lag between supplier receipts and their consumption demands? Lucas provides no insight into these possible ambiguities in his model.

Stated more generally, the problem with Lucas’s example is that it seems to be designed to exclude a priori the possibility of every type of disequilibrium but one, a disequilibrium corresponding to a single type of informational imperfection. Reasoning on the basis of that narrow premise, Lucas shows that, under a given expectation of the future price level, an econometrician would find a positive correlation between the price level and output — a negatively sloped Phillips Curve. Yet, under the same assumptions, Lucas also shows that an anticipated policy to raise the rate of inflation would fail to raise output (or, by implication, increase employment). But, given his very narrow underlying assumptions, it seems plausible to doubt the robustness of Lucas’s conclusion. Proving the validity of a proposition requires more than constructing an example in which the proposition is shown to be valid. That would be like trying to prove that the sides of every triangle are equal in length by constructing a triangle whose angles are all equal to 60 degrees, and then claiming that, because the sides of that triangle are equal in length, the sides of all triangles are equal in length.
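Both halves of Lucas's result can be illustrated with a toy simulation of the supply function quoted above (my own sketch, not Lucas's; the value of β and the size of the price shocks are arbitrary assumptions): with the expected price level held fixed, output co-moves with price surprises, while a fully anticipated rise in the price level leaves output unchanged.

```python
import random

random.seed(0)
BETA = 0.5  # supply elasticity with respect to the perceived relative price

def cyclical_supply(p_local, p_expected):
    # Cy_it = beta * (p_it - Ep_it): transitory supply responds only to the
    # gap between the local (log) price and the perceived average price.
    return BETA * (p_local - p_expected)

# Case 1: the expected price level is held fixed, so every local price
# movement is a surprise, and output co-moves with the price level.
p_expected = 0.0
prices = [random.gauss(p_expected, 0.1) for _ in range(1000)]
outputs = [cyclical_supply(p, p_expected) for p in prices]
mean_p = sum(prices) / len(prices)
cov = sum((p - mean_p) * y for p, y in zip(prices, outputs)) / len(prices)
print("price-output covariance:", cov)  # positive: an apparent Phillips relation

# Case 2: a fully anticipated 5% rise in the price level moves p_it and
# Ep_it together, so the surprise, and hence output, is unchanged.
anticipated = [cyclical_supply(p + 0.05, p_expected + 0.05) for p in prices]
max_diff = max(abs(a - b) for a, b in zip(anticipated, outputs))
print("output change under anticipated inflation:", max_diff)  # ~0
```

The simulation merely restates Lucas's premise in code; it inherits the same narrow assumptions, which is exactly the point at issue.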

Perhaps a better model than the one Lucas posited would have been one in which the amount supplied in each market was positively correlated with the amount supplied in every other market, inasmuch as an increase (decrease) in the amount supplied in one market will tend to increase (decrease) demand in other markets. In that case, I conjecture, deviations from permanent supply would tend to be cumulative (though not necessarily permanent), implying a more complex propagation mechanism than Lucas’s simple model does. Nor is it obvious to me how the equilibrium of such a model would compare to the equilibrium in the Lucas model. It does not seem inconceivable that a model could be constructed in which equilibrium output depended on the average price level. But this is just conjecture on my part, because I haven’t tried to write out and solve such a model. Perhaps an interested reader out there will try to work it out and report back to us on the results.
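In that spirit, here is a purely conjectural toy simulation of such a model (my own specification; the spillover and decay parameters are arbitrary assumptions, chosen to keep the system stable). It illustrates how demand spillovers across markets could make aggregate deviations from permanent supply cumulative, though not permanent:

```python
def simulate(spill, decay=0.9, n=10, periods=40, shock=0.001):
    """Average deviation from permanent (log) supply after a one-time
    shock to market 0. Each market's deviation partly persists (decay)
    and is pushed further by the average deviation elsewhere (spill),
    standing in for demand spillovers across markets."""
    dev = [shock * n] + [0.0] * (n - 1)
    path = []
    for _ in range(periods):
        avg = sum(dev) / n
        dev = [decay * d + spill * (n - 1) * avg for d in dev]
        path.append(sum(dev) / n)
    return path

no_spill = simulate(spill=0.0)
with_spill = simulate(spill=0.008)  # assumed value; decay + spill*(n-1) < 1

# With spillovers, the aggregate deviation dies out more slowly:
# cumulative relative to the no-spillover case, though not permanent.
print(with_spill[30] > no_spill[30] > 0)          # True
print(abs(with_spill[-1]) < abs(with_spill[0]))   # True: deviations still fade
```

Nothing here solves the model or pins down its equilibrium; it only shows that a mutually reinforcing propagation mechanism of the kind conjectured above is easy to write down.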

PS:  Congratulations to Scott Sumner on his excellent op-ed on nominal GDP level targeting in today’s Financial Times.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
