Archive for the 'Stephen Williamson' Category

Price Stickiness and Macroeconomics

Noah Smith has a classically snide rejoinder to Stephen Williamson’s outrage at Noah’s Bloomberg paean to price stickiness and to the classic Ball and Mankiw article on the subject, an article that provoked an embarrassingly outraged response from Robert Lucas when it was published over 20 years ago. I don’t know if Lucas ever got over it, but evidently Williamson hasn’t.

Now to be fair, Lucas’s outrage, though misplaced, was understandable, at least if one understands how it arose. Lucas was so offended by the ironic tone in which Ball and Mankiw cast themselves as defenders of traditional macroeconomics – including both Keynesians and Monetarists – against the onslaught of “heretics” like Lucas, Sargent, Kydland and Prescott that he simply stopped reading after the first few pages. Then, in a fit of righteous indignation, he wrote a diatribe attacking Ball and Mankiw as religious fanatics trying to halt the progress of science, as if that were the real message of the paper – not, to say the least, a very sophisticated reading of what Ball and Mankiw wrote.

While I am not hostile to the idea of price stickiness — one of the most popular posts I have written was an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input prices and most output prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness and about many explanations, including those offered by Ball and Mankiw, for price stickiness. I think that the negative reactions that price stickiness elicits from a lot of economists — and not only from Lucas and Williamson — reflect a feeling that price stickiness is not well grounded in any economic theory.

Let me offer a slightly different criticism of price stickiness as a feature of macroeconomic models, which is simply that although price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. That is what students are taught in Econ 1. And it is an important lesson, but it is also a “partial” lesson. It is partial, because it applies to a single market that is out of equilibrium. The implicit assumption in that exercise is that nothing else is changing, which means that all other markets — well, not quite all other markets, but I will ignore that nuance – are in equilibrium. That’s what I mean when I say (as I have done before) that just as macroeconomics needs microfoundations, microeconomics needs macrofoundations.

Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. Even more disturbing, the standard price-adjustment rule may not, even under a tatonnement process in which no trading is allowed at disequilibrium prices, lead to the discovery of the equilibrium price vector. Of course, in the real world trading occurs routinely at disequilibrium prices, so that the “mechanical” forces tending to move an economy toward equilibrium are even weaker than the standard analysis of price adjustment would suggest.
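
To make the Econ 1 lesson concrete, here is a minimal sketch, in Python, of that single-market price-adjustment rule; the linear supply and demand curves and the adjustment speed are made-up numbers chosen purely for illustration:

    # Single-market tatonnement: raise the price when there is excess demand,
    # cut it when there is excess supply. Illustrative parameters only.

    def excess_demand(p):
        demand = 100.0 - 2.0 * p     # downward-sloping demand
        supply = -20.0 + 4.0 * p     # upward-sloping supply
        return demand - supply

    p = 5.0                          # start well below the market-clearing price
    for _ in range(50):
        p += 0.1 * excess_demand(p)  # the standard price-adjustment rule

    print(round(p, 4))               # converges to 20.0, where supply equals demand

The convergence here is entirely an artifact of the one-market setting; with many interdependent markets adjusting simultaneously, the same rule can cycle forever, as Scarf’s well-known examples show.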

This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Franklin Fisher made this point 30 years ago in an important, but insufficiently appreciated, volume Disequilibrium Foundations of Equilibrium Economics. But the idea goes back even further: to Hayek’s important work on intertemporal equilibrium, especially his classic paper “Economics and Knowledge,” formalized by Hicks in the temporary-equilibrium model described in Value and Capital.

The key point made by Hayek in this context is that there can be an intertemporal equilibrium if and only if all agents formulate their individual plans on the basis of the same expectations of future prices. If their expectations for future prices are not the same, then any plans based on incorrect price expectations will have to be revised, or abandoned altogether, as price expectations are disappointed over time. For price adjustment to lead an economy back to equilibrium, the price adjustment must converge on an equilibrium price vector and on correct price expectations. But, as Hayek understood in 1937, and as Fisher explained in a dense treatise 30 years ago, we have no economic theory that explains how such a price vector, even if it exists, is arrived at, even under a tatonnement process, much less under decentralized price setting. Pinning the blame on this vague thing called price stickiness doesn’t address the deeper underlying theoretical issue.

Of course for Lucas et al. to scoff at price stickiness on these grounds is a bit rich, because Lucas and his followers seem entirely comfortable with assuming that the equilibrium price vector is rationally expected. Indeed, rational expectation of the equilibrium price vector is held up by Lucas as precisely the microfoundation that transformed the unruly field of macroeconomics into a real science.

Monetary Theory on the Neo-Fisherite Edge

The week before last, Noah Smith wrote a post “The Neo-Fisherite Rebellion” discussing, rather sympathetically I thought, the contrarian school of monetary thought emerging from the Great American Heartland, according to which, notwithstanding everything monetary economists since Henry Thornton have taught, high interest rates are inflationary and low interest rates deflationary. This view of the relationship between interest rates and inflation was advanced (but later retracted) by Narayana Kocherlakota, President of the Minneapolis Fed, in a 2010 lecture, and was embraced and expounded with increased steadfastness by Stephen Williamson of Washington University in St. Louis and the St. Louis Fed in at least one working paper and in a series of posts over the past five or six months (e.g. here, here and here). And John Cochrane of the University of Chicago has picked up on the idea as well in two recent blog posts (here and here). Others seem to be joining the upstart school as well.

The new argument seems simple: given the Fisher equation, in which the nominal interest rate equals the real interest rate plus the (expected) rate of inflation, a central bank can meet its inflation target by setting a fixed nominal interest rate target consistent with its inflation target and keeping it there. Once the central bank sets its target, the long-run neutrality of money, implying that the real interest rate is independent of the nominal targets set by the central bank, ensures that inflation expectations must converge on rates consistent with the nominal interest rate target and the independently determined real interest rate (i.e., the real yield curve), so that the actual and expected rates of inflation adjust to ensure that the Fisher equation is satisfied. If the promise of the central bank to maintain a particular nominal rate over time is believed, the promise will induce a rate of inflation consistent with the nominal interest-rate target and the exogenous real rate.
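
In symbols (my notation, not necessarily the Neo-Fisherites’): with i the nominal interest rate, r the real interest rate, and \pi^e the expected rate of inflation, the Fisher equation is

    i = r + \pi^e \quad \Longrightarrow \quad \pi^e = i - r

so that, if r is determined independently of monetary policy, pegging i at the level consistent with the inflation target is supposed to force \pi^e (and, with long-run neutrality, actual inflation) to do all the adjusting.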

The novelty of this way of thinking about monetary policy is that monetary theorists have generally assumed that the actual adjustment of the price level or inflation rate depends on whether the target interest rate is greater or less than the real rate plus the expected rate of inflation. When the target rate is greater than the real rate plus expected inflation, inflation goes down, and when it is less than the real rate plus expected inflation, inflation goes up. In the conventional treatment, the expected rate of inflation is momentarily fixed, and the (expected) real rate variable. In the Neo-Fisherite school, the (expected) real rate is fixed, and the expected inflation rate is variable. (Just as an aside, I would observe that the idea that expectations about the real rate of interest and the inflation rate cannot both adjust simultaneously in the short run is not derived from the limited cognitive capacity of economic agents; it can only be derived from the limited intellectual capacity of economic theorists.)

The heretical views expressed by Williamson and Cochrane and earlier by Kocherlakota have understandably elicited scorn and derision from conventional monetary theorists, whether Keynesian, New Keynesian, Monetarist or Market Monetarist. (Williamson having appropriated for himself the New Monetarist label, I regrettably could not preserve an appropriate symmetry in my list of labels for monetary theorists.) As a matter of fact, I wrote a post last December challenging Williamson’s reasoning in arguing that QE had caused a decline in inflation, though in his initial foray into uncharted territory, Williamson was actually making a narrower argument than the more general thesis that he has more recently expounded.

Although, deep down, I have no great sympathy for Williamson’s argument, the counterarguments I have seen leave me feeling a bit, shall we say, underwhelmed. That’s not to say that I am becoming a convert to New Monetarism, but I am feeling that we have reached a point at which certain underlying gaps in monetary theory can’t be concealed any longer. To explain what I mean by that remark, let me start by reviewing the historical context in which the ruling doctrine governing central-bank operations via adjustments in the central-bank lending rate evolved. The primary (though historically not the first) source of the doctrine is Henry Thornton in his classic volume An Enquiry into the Nature and Effects of the Paper Credit of Great Britain.

Even though Thornton focused on the policy of the Bank of England during the Napoleonic Wars, when Bank of England notes, not gold, were legal tender, his discussion was still in the context of a monetary system in which paper money was generally convertible into either gold or silver. Inconvertible banknotes – aka fiat money — were the exception not the rule. Gold and silver were what Nick Rowe would call alpha money. All other moneys were evaluated in terms of gold and silver, not in terms of a general price level (not yet a widely accepted concept). Even though Bank of England notes became an alternative alpha money during the restriction period of inconvertibility, that situation was generally viewed as temporary, the restoration of convertibility being expected after the war. The value of the paper pound was tracked by the sterling price of gold on the Hamburg exchange. Thus, Ricardo’s first published work was entitled The High Price of Bullion, in which he blamed the high sterling price of bullion at Hamburg on an overissue of banknotes by the Bank of England.

But to get back to Thornton, who was far more concerned with the mechanics of monetary policy than Ricardo, his great contribution was to show that the Bank of England could control the amount of lending (and money creation) by adjusting the interest rate charged to borrowers. If banknotes were depreciating relative to gold, the Bank of England could increase the value of its notes by raising the rate of interest charged on loans.

The point is that if you are a central banker and are trying to target the exchange rate of your currency with respect to an alpha currency, you can do so by adjusting the interest rate that you charge borrowers. Raising the interest rate will cause the exchange value of your currency to rise and reducing the interest rate will cause the exchange value to fall. And if you are operating under strict convertibility, so that you are committed to keep the exchange rate between your currency and an alpha currency at a specified par value, raising that interest rate will cause you to accumulate reserves payable in terms of the alpha currency, and reducing that interest rate will cause you to emit reserves payable in terms of the alpha currency.

So the idea that an increase in the central-bank interest rate tends to increase the exchange value of its currency, or, under a fixed-exchange rate regime, an increase in the foreign exchange reserves of the bank, has a history at least two centuries old, though the doctrine has not exactly been free of misunderstanding or confusion in the course of those two centuries. One of those misunderstandings was about the effect of a change in the central-bank interest rate, under a fixed-exchange rate regime. In fact, as long as the central bank is maintaining a fixed exchange rate between its currency and an alpha currency, changes in the central-bank interest rate don’t affect (at least as a first approximation) either the domestic money supply or the domestic price level; all that changes in the central-bank interest rate can accomplish is to change the bank’s holdings of alpha-currency reserves.

It seems to me that this long, well-documented historical association between changes in central-bank interest rates and the exchange value of currencies and the level of private spending is the basis for the widespread theoretical presumption that raising the central-bank interest rate target is deflationary and reducing it is inflationary. However, the old central-bank doctrine of the Bank Rate was conceived in a world in which gold and silver were the alpha moneys, and central banks – even central banks operating with inconvertible currencies – were beta banks, because the value of a central-bank currency was still reckoned, like the value of inconvertible Bank of England notes in the Napoleonic Wars, in terms of gold and silver.

In the Neo-Fisherite world, central banks rarely peg exchange rates against each other, and there is no longer any outside standard of value to which central banks even nominally commit themselves. In a world without the metallic standard of value in which the conventional theory of central banking developed, do the propositions about the effects of central-bank interest-rate setting still obtain? I am not so sure that they do, not with the analytical tools that we normally deploy when thinking about the effects of central-bank policies. Why not? Because, in a Neo-Fisherite world in which all central banks are alpha banks, I am not so sure that we really know what determines the value of this thing called fiat money. And if we don’t really know what determines the value of a fiat money, how can we really be sure that interest-rate policy works the same way in a Neo-Fisherite world that it used to work when the value of money was determined in relation to a metallic standard? (Just to avoid misunderstanding, I am not – repeat NOT — arguing for restoring the gold standard.)

Why do I say that we don’t know what determines the value of fiat money in a Neo-Fisherite world? Well, consider this. Almost three weeks ago I wrote a post in which I suggested that Bitcoins could be a massive bubble. My explanation for why Bitcoins could be a bubble is that they provide no real (i.e., non-monetary) service, so that their value is totally contingent on, and derived from (or so it seems to me, though I admit that my understanding of Bitcoins is partial and imperfect), the expectation of a positive future resale value. However, it seems certain that the resale value of Bitcoins must eventually fall to zero, so that backward induction implies that Bitcoins, inasmuch as they provide no real service, cannot retain a positive value in the present. On this reasoning, any observed value of a Bitcoin seems inexplicable except as an irrational bubble phenomenon.
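
Schematically, the backward-induction argument runs as follows (my notation): if V_t is the value of a Bitcoin at date t, T the date at which Bitcoins cease to be accepted in exchange, and r the relevant discount rate, then

    V_T = 0, \qquad V_t = \frac{E_t[V_{t+1}]}{1+r} \quad \Longrightarrow \quad V_t = 0 \ \text{for all } t \le T

there being, by assumption, no real service flow to sustain a positive value at any date once the terminal resale value is zero.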

Most of the comments I received about that post challenged the relevance of the backward-induction argument. The challenges were mainly of two types: a) the end state, when everyone will certainly stop accepting a Bitcoin in exchange, is very, very far into the future and its date is unknown, and b) the backward-induction argument applies equally to every fiat currency, so my own reasoning, according to my critics, implies that the value of every fiat currency is just as much a bubble phenomenon as the value of a Bitcoin.

My response to the first objection is that even if the strict logic of the backward-induction argument is inconclusive, because of the long and uncertain interval between now and the end state, the argument nevertheless suggests that the value of a Bitcoin is potentially very unsteady and vulnerable to sudden collapse. Those are not generally thought to be desirable attributes in a medium of exchange.

My response to the second objection is that fiat currencies are actually quite different from Bitcoins, because fiat currencies are accepted by governments in discharging the tax liabilities due to them. The discharge of a tax liability is a real (i.e. non-monetary) service, creating a distinct non-monetary demand for fiat currencies, thereby ensuring that fiat currencies retain value, even apart from being accepted as a medium of exchange.

That, at any rate, is my view, which I first heard from Earl Thompson (see his unpublished paper, “A Reformulation of Macroeconomic Theory,” pp. 23-25, for a derivation of the value of fiat money when tax liability is a fixed proportion of income). Some other pretty good economists have also held that view, like Abba Lerner, P. H. Wicksteed, and Adam Smith. Georg Friedrich Knapp also held that view, and, in his day, he was certainly well known, but I am unable to pass judgment on whether he was or wasn’t a good economist. But I do know that his views about money were famously misrepresented and caricatured by Ludwig von Mises. However, there are other good economists (Hal Varian for one), apparently unaware of, or untroubled by, the backward-induction argument, who don’t think that acceptability in discharging tax liability is required to explain the value of fiat money.

Nor do I think that Thompson’s tax-acceptability theory of the value of money can stand entirely on its own, because it implies a kind of saw-tooth time profile of the price level, so that a fiat currency, earning no liquidity premium, would actually be appreciating between peak tax collection dates, and depreciating immediately following those dates, a pattern not obviously consistent with observed price data, though I do recall that Thompson used to claim that there is a lot of evidence that prices fall just before peak tax-collection dates. I don’t think that anyone has ever tried to combine the tax-acceptability theory with the empirical premise that currency (or base money) does in fact provide significant liquidity services. That, it seems to me, would be a worthwhile endeavor for any eager young researcher to undertake.
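
For what it’s worth, here is a toy simulation, in Python, of the saw-tooth profile just described. It is not Thompson’s model, just an illustration under invented numbers, assuming quarterly peak tax-collection dates and a currency earning no liquidity premium, so that money must appreciate at the real rate between tax dates and depreciate just after them:

    # Toy saw-tooth price-level path implied by a pure tax-acceptability
    # theory. All parameters are illustrative, not estimates.

    r_monthly = 0.05 / 12        # assumed real rate, expressed monthly
    months_between = 3           # quarterly peak tax-collection dates
    P0 = 100.0                   # price level just after taxes are paid

    path = []
    for t in range(24):
        k = t % months_between   # months elapsed since the last tax date
        # prices fall at the real rate as the next tax date approaches,
        # then jump back up to P0 right after the taxes are paid
        path.append(round(P0 * (1 + r_monthly) ** (-k), 3))

    print(path)                  # 100.0, 99.585, 99.172, 100.0, 99.585, ...

The empirical question Thompson raised, whether prices actually dip just before peak tax-collection dates, is then a matter of looking for this saw-tooth in the data.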

What does all of this have to do with the Neo-Fisherite Rebellion? Well, if we don’t have a satisfactory theory of the value of fiat money at hand — which is what another very smart economist, Fischer Black (who, to my knowledge, never mentioned the tax-liability theory), thought — then the only explanation of the value of fiat money is that, like the value of a Bitcoin, it is whatever people expect it to be. And the rate of inflation is equally inexplicable, being just whatever it is expected to be. So in a Neo-Fisherite world, if the central bank announces that it is reducing its interest-rate target, the effect of the announcement depends entirely on what “the market” reads into the announcement. And that is exactly what Fischer Black believed. See his paper “Active and Passive Monetary Policy in a Neoclassical Model.”

I don’t say that Williamson and his Neo-Fisherite colleagues are correct. Nor have they, to my knowledge, related their arguments to Fischer Black’s work. What I do say (indeed this is a problem I raised almost three years ago in one of my first posts on this blog) is that existing monetary theories of the price level are unable to rule out Black’s result, because the behavior of the price level and inflation seems to depend, more than anything else, on expectations. And it is far from clear to me that there are any fundamentals in which these expectations can be grounded. If you impose the rational-expectations assumption, which is almost certainly wrong empirically, maybe you can argue that the central bank provides a focal point for expectations to converge on. The problem, of course, is that in the real world, expectations are all over the place, there being no fundamentals to force the convergence of expectations to a stable equilibrium value.

In other words, it’s just a mess, a bloody mess, and I do not like it, not one little bit.

Stephen Williamson Defends the FOMC

Publication of the transcripts of the FOMC meetings in 2008 has triggered a wave of criticism of the FOMC for the decisions it took in 2008. Since the transcripts were released I have written two posts (here and here) charging that the inflation-phobia of the FOMC was a key (though not the sole) cause of the financial crisis in September 2008. Many other bloggers, Matt Yglesias, Scott Sumner, Brad DeLong and Paul Krugman, just to name a few, were also sharply critical of the FOMC, though Paul Krugman at any rate seemed to think that the Fed’s inflation obsession was merely weird rather than catastrophic.

Stephen Williamson, however, has a different take on all this. In a post last week, just after the release of the transcripts, Williamson chastised Matt Yglesias for chastising Ben Bernanke and the FOMC for not reducing the Federal Funds target at the September 16 FOMC meeting, the day after Lehman went into bankruptcy. Williamson quotes this passage from Yglesias’s post.

New documents released last week by the Federal Reserve shed important new light on one of the most consequential and underdiscussed moments of recent American history: the decision to hold interest rates flat on Sept. 16, 2008. At the time, the meeting at which the decision was made was overshadowed by the ongoing presidential campaign and Lehman Brothers’ bankruptcy filing the previous day. Political reporters were focused on the campaign, economic reporters on Lehman, and since the news from the Fed was that nothing was changing, it didn’t make for much of a story. But in retrospect, it looks to have been a major policy blunder—one that was harmful on its own terms and that set a precedent for a series of later disasters.

To which Williamson responds acidly:

So, it’s like there was a fire at City Hall, and five years later a reporter for the local rag is complaining that the floor wasn’t swept while the fire was in progress.

Now, in a way, I agree with Williamson’s point here; I think it’s a mistake to overemphasize the September 16 meeting. By September 16, the damage had been done. The significance of the decision not to cut the Fed Funds target is not that the Fed might have prevented a panic that was already developing (though I don’t rule out the possibility that a strong enough statement by the FOMC might have provided enough reassurance to the markets to keep the crisis from spiraling out of control), but what the decision tells us about the mindset of the FOMC. Just read the statement that the Fed issued after its meeting.

The Federal Open Market Committee decided today to keep its target for the federal funds rate at 2 percent.

Strains in financial markets have increased significantly and labor markets have weakened further. Economic growth appears to have slowed recently, partly reflecting a softening of household spending. Tight credit conditions, the ongoing housing contraction, and some slowing in export growth are likely to weigh on economic growth over the next few quarters. Over time, the substantial easing of monetary policy, combined with ongoing measures to foster market liquidity, should help to promote moderate economic growth.

Inflation has been high, spurred by the earlier increases in the prices of energy and some other commodities. The Committee expects inflation to moderate later this year and next year, but the inflation outlook remains highly uncertain.

The downside risks to growth and the upside risks to inflation are both of significant concern to the Committee. The Committee will monitor economic and financial developments carefully and will act as needed to promote sustainable economic growth and price stability.

What planet were they living on? “The downside risks to growth and the upside risks to inflation are both of significant concern to the Committee.” OMG!

Williamson, however, sees it differently.

[T]he FOMC agreed to keep the fed funds rate target constant at 2%. Seems like this was pretty dim-witted of the committee, given what was going on in financial markets that very day, right? Wrong. At that point, the fed funds market target rate had become completely irrelevant.

Williamson goes on to point out that although the FOMC did not change the Fed Funds target, borrowings from the Fed increased sharply in September, so that the Fed was effectively easing its policy even though the target – a meaningless target in Williamson’s view – had not changed.

Thus, by September 16, 2008, it seems the Fed was effectively already at the zero lower bound. At that time the fed funds target was irrelevant, as there were excess reserves in the system, and the effective fed funds rate was irrelevant, as it reflected risk.

I want to make two comments on Williamson’s argument. First, the argument is certainly at odds with Bernanke’s own statement in the transcript, towards the end of the September 16 meeting, giving his own recommendation about what policy action the FOMC should take:

Overall I believe that our current funds rate setting is appropriate, and I don’t really see any reason to change…. Cutting rates would be a very big step that would send a very strong signal about our views on the economy and about our intentions going forward, and I think we should view that step as a very discrete thing rather than as a 25 basis point kind of thing. We should be very certain about that change before we undertake it because I would be concerned, for example, about the implications for the dollar, commodity prices, and the like.

So Bernanke clearly stated his view that the current fed funds target was “appropriate.” He did not say that the fed funds rate was at the lower bound. Instead, he explained why he did not want to cut the fed funds rate, implying that he believed that cutting the rate was an option. He didn’t want to exercise that option, because he did not like the “very strong signal about our views on the economy and about our intentions going forward” that a rate cut would send. Indeed, he intimated that a rate cut of 25 basis points would be meaningless under the circumstances, suggesting an awareness, however vague, that a crisis was brewing, so that a cut in the target rate would have to be substantial to calm, rather than scare, the markets. (The next cut, three weeks later, was 50 basis points, and things only got worse.)

Second, suppose for argument’s sake, that Williamson is right and Bernanke (and almost everyone else) was wrong, that the fed funds target was meaningless. Does that mean that the Fed’s inflation obsession in 2008 is just an optical illusion with no significance — that the Fed was powerless to have done anything that would have increased expenditure and income, thereby avoiding or alleviating the crisis?

I don’t think so, and the reason is that, as I pointed out in my previous post, the dollar began appreciating rapidly in forex markets in mid-July 2008, the dollar-euro exchange rate appreciating by about 12% and the trade-weighted value of the dollar appreciating by about 10% between mid-July and the week before the Lehman collapse. An appreciation that rapid was a clear sign that there was a shortage of dollar liquidity, which was causing spending to drop all through the economy, as later confirmed by the sharp drop in third-quarter GDP. The dollar fell briefly in the days just before and after the Lehman collapse, before resuming its sharp ascent as the financial crisis worsened in September and October, appreciating by another 10-15%.

So even if the fed funds target was ineffectual, the Fed, along with the Treasury, still had it within their power to intervene in forex markets, selling dollars for euros and other currencies, thereby preventing the dollar from rising further in value. Unfortunately, as is clear from the transcripts, the FOMC thought that the rising dollar was a favorable development that would reduce the inflation about which it was so obsessively concerned. So the FOMC happily watched the dollar rise by 25% against other currencies between July and November as the economy tanked, because, as the September 16 statement of the FOMC so eloquently put it, “upside risks to inflation are . . . of significant concern to the Committee.” The FOMC gave us the monetary policy it wanted us to have.

Macroeconomic Science and Meaningful Theorems

Greg Hill has a terrific post on his blog, providing the coup de grace to Stephen Williamson’s attempt to show that the way to increase inflation is for the Fed to raise its Federal Funds rate target. Williamson’s problem, Hill points out, is that he attempts to derive his results from relationships that exist in equilibrium. But equilibrium relationships in and of themselves are sterile. What we care about is how a system responds to some change that disturbs a pre-existing equilibrium.

Williamson acknowledged that “the stories about convergence to competitive equilibrium – the Walrasian auctioneer, learning – are indeed just stories . . . [they] come from outside the model” (here).  And, finally, this: “Telling stories outside of the model we have written down opens up the possibility for cheating. If everything is up front – written down in terms of explicit mathematics – then we have to be honest. We’re not doing critical theory here – we’re doing economics, and we want to be treated seriously by other scientists.”

This self-conscious scientism on Williamson’s part is not just annoyingly self-congratulatory. “Hey, look at me! I can write down mathematical models, so I’m a scientist, just like Richard Feynman.” It’s wildly inaccurate, because the mere statement of equilibrium conditions is theoretically vacuous. Back to Greg:

The most disconcerting thing about Professor Williamson’s justification of “scientific economics” isn’t its uncritical “scientism,” nor is it his defense of mathematical modeling. On the contrary, the most troubling thing is Williamson’s acknowledgement-cum-proclamation that his models, like many others, assume that markets are always in equilibrium.

Why is this assumption a problem?  Because, as Arrow, Debreu, and others demonstrated a half-century ago, the conditions required for general equilibrium are unimaginably stringent.  And no one who’s not already ensconced within Williamson’s camp is likely to characterize real-world economies as always being in equilibrium or quickly converging upon it.  Thus, when Williamson responds to a question about this point with, “Much of economics is competitive equilibrium, so if this is a problem for me, it’s a problem for most of the profession,” I’m inclined to reply, “Yes, Professor, that’s precisely the point!”

Greg proceeds to explain that the Walrasian general equilibrium model involves the critical assumption (implemented by the convenient fiction of an auctioneer who announces prices and computes supply and demand at those prices before allowing trade to take place) that no trading takes place except at the equilibrium price vector (where the number of elements in the vector equals the number of prices in the economy). Without an auctioneer there is no way to ensure that the equilibrium price vector, even if it exists, will ever be found.
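
For concreteness, here is a sketch of the auctioneer’s procedure in a toy two-trader, two-good exchange economy with Cobb-Douglas preferences (a case in which the tatonnement happens to converge; Scarf’s examples show that in general it need not). All parameters are invented for illustration, and no trade takes place until the loop stops at approximately market-clearing prices:

    # Walrasian auctioneer: announce prices, compute excess demands,
    # adjust prices, and allow no trading until markets (nearly) clear.
    import numpy as np

    alpha = np.array([0.3, 0.6])      # each trader's expenditure share on good 1
    endow = np.array([[1.0, 2.0],     # trader 1's endowment of goods 1 and 2
                      [2.0, 1.0]])    # trader 2's endowment

    def excess_demand(p):
        wealth = endow @ p                              # endowment values at announced prices
        shares = np.column_stack([alpha, 1.0 - alpha])  # Cobb-Douglas expenditure shares
        demand = shares * wealth[:, None] / p           # each trader's demands
        return demand.sum(axis=0) - endow.sum(axis=0)

    p = np.array([0.7, 0.3])          # the auctioneer's opening announcement
    for _ in range(2000):
        p = np.maximum(p + 0.05 * excess_demand(p), 1e-9)  # raise p where excess demand > 0
        p = p / p.sum()                                     # only relative prices matter

    print(np.round(p, 3))                 # ~ [0.444, 0.556]
    print(np.round(excess_demand(p), 6))  # ~ [0, 0]: markets (nearly) clear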

Franklin Fisher has shown that decisions made out of equilibrium will only converge to equilibrium under highly restrictive conditions (in particular, “no favorable surprises,” i.e., all “sudden changes in expectations are disappointing”).  And since Fisher has, in fact, written down “the explicit mathematics” leading to this conclusion, mustn’t we conclude that the economists who assume that markets are always in equilibrium are really the ones who are “cheating”?

An alternative general equilibrium story is that learning takes place, allowing the economy to converge over time on a general-equilibrium time path, but Greg easily disposes of that story as well.

[T]he learning narrative also harbors massive problems, which come out clearly when viewed against the background of the Arrow-Debreu idealized general equilibrium construction, which includes a complete set of intertemporal markets in contingent claims.  In the world of Arrow-Debreu, every price in every possible state of nature is known at the moment when everyone’s once-and-for-all commitments are made.  Nature then unfolds – her succession of states is revealed – and resources are exchanged in accordance with the (contractual) commitments undertaken “at the beginning.”

In real-world economies, these intertemporal markets are woefully incomplete, so there’s trading at every date, and a “sequence economy” takes the place of Arrow and Debreu’s timeless general equilibrium.  In a sequence economy, buyers and sellers must act on their expectations of future events and the prices that will prevail in light of these outcomes.  In the limiting case of rational expectations, all agents correctly forecast the equilibrium prices associated with every possible state of nature, and no one’s expectations are disappointed. 

Unfortunately, the notion that rational expectations about future prices can replace the complete menu of Arrow-Debreu prices is hard to swallow.  Frank Hahn, who co-authored “General Competitive Analysis” with Kenneth Arrow (1972), could not begin to swallow it, and, in his disgorgement, proceeded to describe in excruciating detail why the assumption of rational expectations isn’t up to the job (here).  And incomplete markets are, of course, but one departure from Arrow-Debreu.  In fact, there are so many more that Hahn came to ridicule the approach of sweeping them all aside, and “simply supposing the economy to be in equilibrium at every moment of time.”

Just to pile on, I would also point out that any general equilibrium model assumes that there is a given state of knowledge that is available to all traders collectively, but not necessarily to each trader. In this context, learning means that traders gradually learn what the pre-existing facts are. But in the real world, knowledge increases and evolves through time. As knowledge changes, capital — both human and physical — embodying that knowledge becomes obsolete and has to be replaced or upgraded, at unpredictable moments of time, because it is the nature of new knowledge that it cannot be predicted. The concept of learning incorporated in these sorts of general equilibrium constructs is a travesty of the kind of learning that characterizes the growth of knowledge in the real world. The implications for the existence of a general equilibrium model in a world in which knowledge grows in an unpredictable way are devastating.

Greg aptly sums up the absurdity of using general equilibrium theory (the description of a decentralized economy in which the component parts are in a state of perfect coordination) as the microfoundation for macroeconomics (the study of decentralized economies that are less than perfectly coordinated) as follows:

What’s the use of “general competitive equilibrium” if it can’t furnish a sturdy, albeit “external,” foundation for the kind of modeling done by Professor Williamson, et al?  Well, there are lots of other uses, but in the context of this discussion, perhaps the most important insight to be gleaned is this: Every aspect of a real economy that Keynes thought important is missing from Arrow and Debreu’s marvelous construction.  Perhaps this is why Axel Leijonhufvud, in reviewing a state-of-the-art New Keynesian DSGE model here, wrote, “It makes me feel transported into a Wonderland of long ago – to a time before macroeconomics was invented.”

To which I would just add that nearly 70 years ago, Paul Samuelson published his magnificent Foundations of Economic Analysis, a work undoubtedly read and mastered by Williamson. But the central contribution of the Foundations was the distinction between equilibrium conditions and what Samuelson (owing to the influence of the still fashionable philosophical school called logical positivism) mislabeled meaningful theorems. A mere equilibrium condition is not the same as a meaningful theorem, but Samuelson showed how a meaningful theorem can be mathematically derived from an equilibrium condition. The link between equilibrium conditions and meaningful theorems was the foundation of economic analysis. Without a mathematical connection between equilibrium conditions and meaningful theorems analogous to the one provided by Samuelson in the Foundations, claims to have provided microfoundations for macroeconomics are, at best, premature.

Does Macroeconomics Need Financial Foundations?

One of the little instances of collateral damage occasioned by the hue and cry following upon Stephen Williamson’s post arguing that quantitative easing has been deflationary was the dustup between Scott Sumner and financial journalist and blogger Izabella Kaminska. I am not going to comment on the specifics of their exchange except to say that the misunderstanding and hard feelings between them seem to have been resolved more or less amicably. However, in quickly skimming the exchange between them, I was rather struck by the condescending tone of Kaminska’s (perhaps understandable coming from the aggrieved party) comment about the lack of comprehension by Scott and Market Monetarists more generally of the basics of finance.

First I’d just like to say I feel much of the misunderstanding comes from the fact that market monetarists tend to ignore the influence of shadow banking and market plumbing in the monetary world. I also think (especially from my conversation with Lars Christensen) that they ignore technological disruption, and the influence this has on wealth distribution and purchasing decisions amongst the wealthy, banks and corporates. Also, as I outlined in the post, my view is slightly different to Williamson’s, it’s based mostly on the scarcity of safe assets and how this can magnify hoarding instincts and fragment store-of-value markets, in a Gresham’s law kind of way. Expectations obviously factor into it, and I think Williamson is absolutely right on that front. But personally I don’t think it’s anything to do with temporary or permanent money expansion expectations. IMO It’s much more about risk expectations, which can — if momentum builds — shift very very quickly, making something deflationary, inflationary very quickly. Though, that doesn’t mean I am worried about inflation (largely because I suspect we may have reached an important productivity inflection point).

This remark was followed up with several comments blasting Market Monetarists for their ignorance of the basics of finance and commending Kaminska for the depth of her understanding to which Kaminska warmly responded adding a few additional jibes at Sumner and Market Monetarists. Here is one.

Market monetarists are getting testy because now that everybody started scrutinizing QE they will be exposed as ignorant. The mechanisms they originally advocated QE would work through will be seen as hopelessly naive. For them the money is like glass beads squirting out of the Federal Reserve, you start talking about stuff like collateral, liquid assets, balance sheets and shadow banking and they are out of their depth.

For laughs: Sumner once tried to defend the childish textbook model of banks lending out reserves and it ended in a colossal embarrassment in the comments section http://www.themoneyillusion.com/?p=5893

For you to defend your credentials in front of such “experts” is absurd. There is a lot more depth to your understanding than to their sandbox vision of the monetary system. And yes, it *is* crazy that journalists and bloggers can talk about these things with more sense than academics. But this [is] the world we live in.

To which Kaminska graciously replied:

Thanks as well! And I tend to agree with your assessment of the market monetarist view of the world.

So what is the Market Monetarist view of the world of which Kaminska tends to have such a low opinion? Well, from reading Kaminska’s comments and those of her commenters, it seems to be that Market Monetarists have an insufficiently detailed and inaccurate view of financial intermediaries, especially of banks and shadow banks, and that Market Monetarists don’t properly understand the role of safe assets and collateral in the economy. But the question is why, and how, does any of this matter to a useful description of how the economy works?

Well, this whole episode started when Stephen Williamson had a blog post arguing that QE was deflationary, and the reason it’s deflationary is that creating more high-powered money provides the economy with more safe assets and thereby reduces the liquidity premium associated with safe assets like short-term Treasuries and cash. By reducing the liquidity premium, QE raises the real yield on those safe assets, which, with the nominal rate stuck at zero, implies via the Fisher equation a lower rate of inflation.

Kaminska thinks that this argument, which Market Monetarists find hard to digest, makes sense, though she can’t quite bring herself to endorse it either. But she finds the emphasis on collateral and safety and market plumbing very much to her taste. In my previous post, I raised what I thought were some problems with Williamson’s argument.

First, what is the actual evidence that there is a substantial liquidity premium on short-term Treasuries? If I compare the rates on short-term Treasuries with the rates on commercial paper issued by non-financial institutions, I don’t find much difference. If there is a substantial unmet demand for good collateral, and there is only a small difference in yield between commercial paper and short-term Treasuries, one would think that non-financial firms could make a killing by issuing a lot more commercial paper. When I wrote the post, I was wondering whether I, a financial novice, might be misreading the data or mismeasuring the liquidity premium on short-term Treasuries. So far, no one has said anything about that, but if I am wrong, I am happy to be enlightened.
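
For anyone who wants to check my reading of the data, here is roughly how I would eyeball it. This is a sketch, not a careful study, and I am assuming the FRED series codes below (DTB3 for the 3-month Treasury bill rate, DCPN3M for the 3-month AA nonfinancial commercial paper rate) are the relevant ones:

    # Compare short-term Treasury yields with nonfinancial commercial paper
    # yields; a spread of only a few basis points would suggest little
    # differential liquidity premium on short-term Treasuries.
    import pandas_datareader.data as web

    rates = web.DataReader(["DTB3", "DCPN3M"], "fred",
                           start="2013-01-01", end="2013-12-31").dropna()
    spread = rates["DCPN3M"] - rates["DTB3"]   # CP yield minus T-bill yield
    print(spread.describe())                   # mean spread in percentage points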

Here’s something else I don’t get. What’s so special about so-called safe assets? Suppose, as Williamson claims, that there’s a shortage of safe assets. Why does that imply a liquidity premium? One could still compensate for the lack of safety by over-collateralizing the loan using an inferior asset. If that is a possibility, why is the size of the liquidity premium not constrained?
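
The over-collateralization arithmetic is simple enough: to secure a loan of L with an inferior asset subject to a haircut h, a borrower posts collateral worth

    \frac{L}{1-h}

so that, for example, borrowing $100 against an asset with an assumed 25% haircut means posting about $133 of it. If that channel is open, the scarcity of safe assets ought to show up as a finite haircut, not as an unbounded liquidity premium.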

I also pointed out in my previous post that a declining liquidity premium would be associated with a shift out of money and into real assets, which would cause an increase in asset prices. An increase in asset prices would tend to be associated with an increase in the value of the underlying service flows embodied in the assets, in other words in an increase in current prices, so that, if Williamson is right, QE should have caused measured inflation to rise even as it caused inflation expectations to fall. Of course Williamson believes that the decrease in the liquidity premium raises the real yield on safe assets, but it is not clear that such a change in real interest rates has any implications for the current price level. So Williamson’s claim that his model explains the decline in observed inflation since QE was instituted does not seem all that compelling.

Now, as one who has written a bit about banking and shadow banking, and as one who shares the low opinion of the above-mentioned commenter on Kaminska’s blog about the textbook model (which Sumner does not defend, by the way) of the money supply via a “money multiplier,” I am in favor of changing how the money supply is incorporated into macromodels. Nevertheless, it is far from clear that changing the way that the money supply is modeled would significantly change any important policy implications of Market Monetarism. Perhaps it would, but if so, that is a proposition to be proved (or at least argued), not a self-evident truth to be asserted.

I don’t say that finance and banking are not important. Current spreads between borrowing and lending rates may not provide a sufficient margin for banks to provide the intermediation services that they once provided to a wide range of customers. Businesses have a wider range of options in obtaining financing than they used to, so instead of holding bank accounts and foregoing interest on deposits to be able to have a credit line with their banker, they park their money with a money market fund and obtain financing by issuing commercial paper. This works well for firms large enough to have direct access to lenders, but smaller businesses can’t borrow directly from the market and can only borrow from banks at much higher rates or by absorbing higher costs on their bank accounts than they would bear on a money market fund.

At any rate, when market interest rates are low, and when perceived credit risks are high, there is very little margin for banks to earn a profit from intermediation. If, as a result, the money multiplier — a crude measure of how much intermediation banks are engaging in — goes down, it is up to the monetary authority to provide the public with the liquidity they demand by increasing the amount of bank reserves available to the banking system. Otherwise, total spending would contract sharply as the public tried to build up their cash balances by reducing their own spending – not a pretty picture.
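
For concreteness, the crude textbook measure referred to above is the ratio of broad money to base money, which in the simplest formulation depends on the public’s currency/deposit ratio and the banks’ reserve/deposit ratio. A minimal sketch, with invented numbers:

    # Textbook money multiplier: m = (1 + c) / (c + rr), where c is the
    # currency/deposit ratio and rr the reserve/deposit ratio. Crude, as
    # noted in the text, but it conveys the point: when banks pile up
    # reserves instead of intermediating, m collapses toward 1.

    def money_multiplier(c: float, rr: float) -> float:
        return (1.0 + c) / (c + rr)

    print(money_multiplier(c=0.10, rr=0.01))   # ~10.0: banks fully loaned up
    print(money_multiplier(c=0.10, rr=0.80))   # ~1.2: banks sitting on reserves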

So finance is certainly important, and I really ought to know more about market plumbing and counterparty risk and all that than I do, but the most important thing to know about finance is that the financial system tends to break down when the jointly held expectations of borrowers and lenders that the loans that they agreed to would be repaid on schedule by the borrowers are disappointed. There are all kinds of reasons why, in a given case, those jointly held expectations might be disappointed. But financial crises are associated with a very large cluster of disappointed expectations, and try as they might, the finance guys have not provided a better explanation for that clustering of disappointed expectations than a sharp decline in aggregate demand. That’s what happened in the Great Depression, as Ralph Hawtrey and Gustav Cassel and Irving Fisher and Maynard Keynes understood, and that’s what happened in the Little Depression, as Market Monetarists, especially Scott Sumner, understand. Everything else is just commentary.

Stephen Williamson Gets Stuck at the Zero Lower Bound

Stephen Williamson started quite a ruckus on the econblogosphere with his recent posts arguing that, contrary to the express intentions of the FOMC, Quantitative Easing has actually caused inflation to go down. Whether Williamson’s discovery will have any practical effect remains to be seen, but in the meantime, there has been a lot of head-scratching by Williamson’s readers trying to figure out how he reached such a counterintuitive conclusion. I apologize for getting to this discussion so late, but I have been trying off and on, amid a number of distractions, including travel to Switzerland, where I am now visiting, to think my way through this discussion for the past several days. Let’s see if I have come up with anything enlightening to contribute.

The key ideas that Williamson relies on to derive his result are the standard ones of a real and a nominal interest rate that are related to each other by way of the expected rate of inflation (though Williamson does not distinguish between expected and actual inflation, that distinction perhaps not existing in his rational-expectations universe). The nominal rate must equal the real rate plus the expected rate of inflation. One way to think of the real rate is as the expected net pecuniary return (adjusted for inflation) from holding a real asset expressed as a percentage of the asset’s value, exclusive of any non-pecuniary benefits that it might provide (e.g., the aesthetic services provided by an art object to its owner). Insofar as an asset provides such services, the anticipated real return of the asset would be correspondingly reduced, and its current value enhanced compared to assets providing no non-pecuniary services. The value of assets providing additional non-pecuniary services includes a premium reflecting those services. The non-pecuniary benefit on which Williamson is focused is liquidity — the ease of buying or selling the asset at a price near its actual value — and the value enhancement accruing to assets providing such liquidity services is the liquidity premium.

Suppose that there are just two kinds of assets: real assets that generate (or are expected to do so) real pecuniary returns and money. Money provides liquidity services more effectively than any other asset. Now in any equilibrium in which both money and non-money assets are held, the expected net return from holding each asset must equal the expected net return from holding the other. If money, at the margin, is providing net liquidity services provided by no other asset, the expected pecuniary yield from holding money must be correspondingly less than the expected yield on the alternative real asset. Otherwise people would just hold money rather than the real asset (equivalently, the value of real assets would have to fall before people would be willing to hold those assets).
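
In symbols (my notation, not Williamson’s): if r is the expected real return on the real asset, \pi^e the expected rate of inflation, and \ell the marginal liquidity yield on money, the condition for both assets to be willingly held is

    -\pi^e + \ell = r

the left-hand side being money’s expected real return, its pecuniary yield of -\pi^e plus its liquidity services, and the right-hand side the real asset’s.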

Here’s how I understand what Williamson is trying to do. I am not confident in my understanding, because Williamson’s first post was very difficult to follow. He started off with a series of propositions derived from Milton Friedman’s argument about the optimality of deflation at the real rate of interest, which implies a zero nominal interest rate, making it costless to hold money. Liquidity would be free, and the liquidity premium would be zero.

From this Friedmanian analysis of the optimality of expected deflation at a rate equal to the real rate of interest, Williamson transitions to a very different argument in which the zero lower bound does not eliminate the liquidity premium. Williamson posits a liquidity premium on bonds, the motivation being that bonds are useful as readily acceptable collateral. Williamson posits this liquidity premium as a fact, but without providing evidence, offering only an argument that the financial crisis destroyed or rendered unusable lots of assets that previously were, or could have been, used as collateral, thereby making Treasury bonds of short duration highly liquid and imparting to them a liquidity premium. If both bonds and money are held, and both offer the same zero nominal pecuniary return, then an equal liquidity premium must accrue to both bonds and money.

But something weird seems to have happened. We are supposed to be at the zero lower bound, and bonds and money are earning a liquidity premium, which means that the real pecuniary yield on bonds and money is negative, which contradicts Friedman’s proposition that a zero nominal interest rate implies that holding money is costless and that there is no liquidity premium. As best as I can figure this out, Williamson seems to be assuming that the real yield on real (illiquid) capital is positive, so that the zero lower bound is really an illusion, a mirage created by the atypical demand for government bonds for use as collateral.

As I suggested before, this is an empirical claim, and it should be possible to provide empirical support for the proposition that there is an unusual liquidity premium attaching to government debt of short duration in virtue of its superior acceptability as collateral. One test of the proposition would be to compare the yields on government debt of short duration versus non-government debt of short duration. A quick check here indicates that the yields on 90-day commercial paper issued by non-financial firms are very close to zero, suggesting to me that government debt of short duration is not providing any liquidity premium. If so, then the expected short-term yield on real capital may not be significantly greater than the yield on government debt, so that we really are at the zero lower bound rather than at a pseudo-zero lower bound as Williamson seems to be suggesting.

Given his assumption that there is a significant liquidity premium attaching to money and short-term government debt, I understand Williamson to be making the following argument about Quantitative Easing. There is a shortage of government debt in the sense that the public would like to hold more government debt than is being supplied. Since the federal budget deficit is rapidly shrinking, leaving the demand for short-term government debt unsatisfied, quantitative easing at least provides the public with the opportunity to exchange their relatively illiquid long-term government debt for highly liquid bank reserves created by the Fed. By so doing, the Fed is reducing the liquidity premium. But at the pseudo-zero-lower bound, a reduction in the liquidity premium implies a reduced rate of inflation, because it is the expected rate of inflation that reduces the expected return on holding money to offset the liquidity yield provided by money.
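
In terms of the equilibrium condition written down above, -\pi^e + \ell = r can be rearranged as

    \pi^e = \ell - r

so that, on this reading of Williamson, any policy (like QE) that shrinks the liquidity premium \ell must shrink expected inflation \pi^e one-for-one, r being pinned down on the real side of the model.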

Williamson argues that by reducing the liquidity premium on holding money, QE has been the cause of the steadily declining rate of inflation over the past three years. This is a very tricky claim, because, even if we accept Williamson’s premises, he is leaving something important out of the analysis. Williamson’s argument is really about the effect of QE on expected inflation in equilibrium. But he pays no attention to the immediate effect of a change in the liquidity premium. If people reduce their valuation of money, because it is providing a reduced level of liquidity services, that change must be reflected in an immediate reduction in the demand to hold money, which would imply an immediate shift out of money into other assets. In other words, the value of money must fall. Conceptually, this would be an instantaneous, once-and-for-all change, but if Williamson’s analysis is correct, that once-and-for-all change should have been reflected in an increased measured rate of inflation even as inflation expectations were falling. So it seems to me that the empirical fact of observed declines in the rate of inflation that motivates Williamson’s analysis turns out to be inconsistent with the implications of his analysis.
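
The inconsistency is easy to see in a toy calculation (Python, with invented numbers): give the price level a one-time upward jump when the liquidity premium falls, while stepping trend inflation down, and measured inflation spikes at impact before settling at the lower trend, which is the opposite of a smooth decline:

    # One-time price-level jump versus lower trend inflation thereafter.
    # All numbers invented for illustration.

    pre_trend, post_trend, jump = 0.02, 0.01, 0.03
    P, path = 100.0, [100.0]
    for year in range(1, 11):
        P *= 1 + (pre_trend if year < 6 else post_trend)
        if year == 6:
            P *= 1 + jump      # demand for money falls: the price level jumps once
        path.append(P)

    inflation = [path[t] / path[t - 1] - 1 for t in range(1, 11)]
    print([f"{x:.1%}" for x in inflation])
    # -> five years at 2.0%, a 4.0% spike at the jump, then 1.0% thereafter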

The State We’re In

Last week, Paul Krugman, set off by this blog post, complained about the current state of macroeconomics. Apparently, Krugman feels that if saltwater economists like himself were willing to accommodate the intertemporal-maximization paradigm developed by the freshwater economists, the freshwater economists ought to have reciprocated by acknowledging some role for countercyclical policy. Seeing little evidence of accommodation on the part of the freshwater economists, Krugman, evidently feeling betrayed, came to this rather harsh conclusion:

The state of macro is, in fact, rotten, and will remain so until the cult that has taken over half the field is somehow dislodged.

Besides engaging in a pretty personal attack on his fellow economists, Krugman did not present a very flattering picture of economics as a scientific discipline. What Krugman describes seems less like a search for truth than a cynical bargaining game, in which Krugman feels that his (saltwater) side, after making good-faith offers of cooperation and accommodation that were seemingly accepted by the other (freshwater) side, was somehow misled into making concessions that undermined his side’s strategic position. What I found interesting was that Krugman seemed unaware that his account of the interaction between saltwater and freshwater economists was not much more flattering to the former than to the latter.

Krugman’s diatribe gave Stephen Williamson an opportunity to scorn and scold Krugman for a crass misunderstanding of the progress of science. According to Williamson, modern macroeconomics has passed by out-of-touch old-timers like Krugman. Among modern macroeconomists, Williamson observes, the freshwater-saltwater distinction is no longer meaningful or relevant. Everyone is now, more or less, on the same page; differences are worked out collegially in seminars, workshops, conferences and in the top academic journals without the rancor and disrespect in which Krugman indulges himself. If you are lucky (and hard-working) enough to be part of it, macroeconomics is a great place to be. One can almost visualize the condescension and the pity oozing from Williamson’s pores for those not part of the charmed circle.

Commenting on this exchange, Noah Smith generally agreed with Williamson that modern macroeconomics is not a discipline divided against itself; the intertemporal maximizers are clearly dominant. But Noah allows himself to wonder whether this is really any cause for celebration – celebration, at any rate, by those not in the charmed circle.

So macro has not yet discovered what causes recessions, nor come anywhere close to reaching a consensus on how (or even if) we should fight them. . . .

Given this state of affairs, can we conclude that the state of macro is good? Is a field successful as long as its members aren’t divided into warring camps? Or should we require a science to give us actual answers? And if we conclude that a science isn’t giving us actual answers, what do we, the people outside the field, do? Do we demand that the people currently working in the field start producing results pronto, threatening to replace them with people who are currently relegated to the fringe? Do we keep supporting the field with money and acclaim, in the hope that we’re currently only in an interim stage, and that real answers will emerge soon enough? Do we simply conclude that the field isn’t as fruitful an area of inquiry as we thought, and quietly defund it?

All of this seems to me to be a side issue. Who cares if macroeconomists like each other or hate each other? Whether they get along or not, whether they treat each other nicely or not, is really of no great import. For example, it was largely at Milton Friedman’s urging that Harry Johnson was hired to be the resident Keynesian at Chicago. But almost as soon as Johnson arrived, he and Friedman were getting into rather unpleasant personal exchanges and arguments. And even though Johnson underwent a metamorphosis from mildly left-wing Keynesianism to moderately conservative monetarism during his nearly two decades at Chicago, his personal and professional relationship with Friedman got progressively worse. And all of that nastiness was happening while both Friedman and Johnson were becoming dominant figures in the economics profession. So what does the level of collegiality and absence of personal discord have to do with the state of a scientific or academic discipline? Not all that much, I would venture to say.

So when Scott Sumner says:

while Krugman might seem pessimistic about the state of macro, he’s a Pollyanna compared to me. I see the field of macro as being completely adrift

I agree totally. But I diagnose the problem with macro a bit differently from how Scott does. He is chiefly concerned with getting policy right, which is certainly important, inasmuch as policy, since early 2008, has, for the most part, been disastrously wrong. One did not need a theoretically sophisticated model to see that the FOMC, out of misplaced concern that inflation expectations were becoming unanchored, kept money way too tight in 2008 in the face of rising food and energy prices, even as the economy was rapidly contracting in the second and third quarters. And in the wake of the contraction in the second and third quarters and a frightening collapse and panic in the fourth quarter, it did not take a sophisticated model to understand that rapid monetary expansion was called for. That’s why Scott writes the following:

All we really know is what Milton Friedman knew, with his partial equilibrium approach. Monetary policy drives nominal variables.  And cyclical fluctuations caused by nominal shocks seem sub-optimal.  Beyond that it’s all conjecture.

Ahem, and Marshall and Wicksell and Cassel and Fisher and Keynes and Hawtrey and Robertson and Hayek and at least 25 others that I could easily name. But it’s interesting to note that, despite his Marshallian (anti-Walrasian) proclivities, it was Friedman himself who started modern macroeconomics down the fruitless path it has been following for the last 40 years when he introduced the concept of the natural rate of unemployment in his famous 1968 AEA presidential address on the role of monetary policy. Friedman defined the natural rate of unemployment as:

the level [of unemployment] that would be ground out by the Walrasian system of general equilibrium equations, provided there is embedded in them the actual structural characteristics of the labor and commodity markets, including market imperfections, stochastic variability in demands and supplies, the costs of gathering information about job vacancies, and labor availabilities, the costs of mobility, and so on.

Aside from the peculiar verb choice in describing the solution of an unknown variable contained in a system of equations, what is noteworthy about his definition is that Friedman was explicitly adopting a conception of an intertemporal general equilibrium as the unique and stable solution of that system of equations, and, whether he intended to or not, appeared to be suggesting that such a concept was operationally useful as a policy benchmark. Thus, despite Friedman’s own deep skepticism about the usefulness and relevance of general-equilibrium analysis, Friedman, for whatever reasons, chose to present his natural-rate argument in the language (however stilted on his part) of the Walrasian general-equilibrium theory for which he had little use and even less sympathy.
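
For readers who do not carry the natural-rate apparatus around in their heads, the standard textbook statement of the hypothesis (my paraphrase, not Friedman’s notation) is the expectations-augmented Phillips curve

\[ \pi_t = \pi_t^e + \alpha (u^n - u_t), \qquad \alpha > 0 , \]

where u^n is the natural rate. In any long-run equilibrium, expectations are fulfilled (\pi_t = \pi_t^e), which forces u_t = u^n regardless of the inflation rate: the long-run Phillips curve is vertical, and holding unemployment below u^n requires perpetually accelerating inflation. Those are the powerful policy conclusions referred to below.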

Inspired by the powerful policy conclusions that followed from the natural-rate hypothesis, Friedman’s direct and indirect followers, most notably Robert Lucas, used that analysis to transform macroeconomics, reducing it to the manipulation of a simplified intertemporal general-equilibrium system. Under the assumption that all economic agents could correctly forecast all future prices (aka rational expectations), all agents could be viewed as intertemporal optimizers, any observed unemployment reflecting the optimizing choices of individuals to consume leisure or to engage in non-market production. I find it inconceivable that Friedman could have been pleased with the direction taken by the economics profession at large, and especially by his own department, when he departed Chicago in 1977. This is pure conjecture on my part, but Friedman’s departure upon reaching retirement age might have had something to do with his lack of sympathy with the direction that his department had, under Lucas’s leadership, already taken. The problem was not so much with policy, but with the whole conception of what constitutes macroeconomic analysis.

The paper by Carlaw and Lipsey, which I referenced in my previous post, provides just one of many possible lines of attack against what modern macroeconomics has become. Without in any way suggesting that their criticisms are not weighty and serious, I would just point out that there really is no basis at all for assuming that the economy can be appropriately modeled as being in a continuous, or nearly continuous, state of general equilibrium. In the absence of a complete set of markets, the Arrow-Debreu conditions for the existence of a full intertemporal equilibrium are not satisfied, and there is no market mechanism that leads, even in principle, to a general equilibrium. The rational-expectations assumption is simply a deus-ex-machina method by which to solve a simplified model, a method with no real-world counterpart. And the suggestion that rational expectations is no more than the extension, let alone a logical consequence, of the standard rationality assumptions of basic economic theory is transparently bogus. Nor is there any basis for assuming that, if a general equilibrium does exist, it is unique, and that if it is unique, it is necessarily stable. In particular, in an economy with an incomplete (in the Arrow-Debreu sense) set of markets, an equilibrium may very much depend on the expectations of agents, expectations potentially even being self-fulfilling. We actually know that in many markets, especially those characterized by network effects, equilibria are expectation-dependent. Self-fulfilling expectations may thus be a characteristic property of modern economies, but they do not necessarily produce equilibrium.
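
A minimal illustration of an expectation-dependent equilibrium, my own textbook-style example rather than anything drawn from the network-effects literature specifically: suppose each agent’s willingness to participate in a market depends on expected aggregate participation n^e, so that realized participation is n = F(n^e) with F increasing, and equilibrium requires that the expectation be fulfilled:

\[ n^* = F(n^*) . \]

If F is S-shaped, there are typically two stable fixed points, a low-participation equilibrium and a high-participation one, and which of them obtains depends entirely on which one agents expect. Market clearing alone does not select between them.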

An especially pretentious conceit of the modern macroeconomics of the last 40 years is that the extreme assumptions on which it rests are the essential microfoundations without which macroeconomics lacks any scientific standing. That’s preposterous. Perfect foresight and rational expectations are assumptions required for finding the solution to a system of equations describing a general equilibrium. They are not essential properties of a system consistent with the basic rationality propositions of microeconomics. To insist that a macroeconomic theory must correspond to the extreme assumptions necessary to prove the existence of a unique stable general equilibrium is to guarantee in advance the sterility and uselessness of that theory, because the entire field of study called macroeconomics is the result of long historical experience strongly suggesting that persistent, even cumulative, deviations from general equilibrium have been routine features of economic life since at least the early 19th century. That modern macroeconomics can tell a story in which apparently large deviations from general equilibrium are not really what they seem is not evidence that such deviations don’t exist; it merely shows that modern macroeconomics has constructed a language that allows the observed data to be classified in terms consistent with a theoretical paradigm that does not allow for lapses from equilibrium. That modern macroeconomics has constructed such a language is no reason why anyone not already committed to its underlying assumptions should feel compelled to accept its validity.

In fact, the standard comparative-statics propositions of microeconomics are also based on the assumption of the existence of a unique stable general equilibrium. Those comparative-statics propositions about the signs of the derivatives of various endogenous variables (price, quantity demanded, quantity supplied, etc.) with respect to various parameters of a microeconomic model involve comparisons between equilibrium values of the relevant variables before and after the posited parametric changes. All such comparative-statics results involve a ceteris-paribus assumption, conditional on the existence of a unique stable general equilibrium which serves as the starting and ending point (after adjustment to the parameter change) of the exercise, thereby isolating the purely hypothetical effect of a parameter change. Thus, as much as macroeconomics may require microfoundations, microeconomics is no less in need of macrofoundations, i.e., the existence of a unique stable general equilibrium, absent which a comparative-statics exercise would be meaningless, because the ceteris-paribus assumption could not otherwise be maintained. To assert that macroeconomics is impossible without microfoundations is therefore to reason in a circle, the empirically relevant propositions of microeconomics being predicated on the existence of a unique stable general equilibrium. But it is precisely the putative failure of a unique stable intertemporal general equilibrium to be attained, or to serve as a powerful attractor to economic variables, that provides the rationale for the existence of a field called macroeconomics.
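
To make the circularity concrete with the simplest textbook case (my illustration, not a quotation from anyone): take linear demand and supply,

\[ q^d = a - bp, \qquad q^s = c + dp, \qquad b, d > 0 , \]

so that the market-clearing price is p^* = (a - c)/(b + d) and the comparative-statics result is

\[ \frac{\partial p^*}{\partial a} = \frac{1}{b + d} > 0 . \]

The signed derivative is a comparison between two equilibrium points; it says nothing about where the market actually is unless p^* is a unique and stable resting point to which the market reliably returns after the shift in a.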

So I certainly agree with Krugman that the present state of macroeconomics is pretty dismal. However, his own admitted willingness (and that of his New Keynesian colleagues) to adopt a theoretical paradigm that assumes the perpetual, or near-perpetual, existence of a unique stable intertemporal equilibrium, or at most admits the possibility of a very small set of deviations from such an equilibrium, means that, by his own admission, Krugman and his saltwater colleagues also bear a share of the responsibility for the very state of macroeconomics that Krugman now deplores.

What Kind of Equilibrium Is This?

In my previous post, I suggested that Stephen Williamson’s views about the incapacity of monetary policy to reduce unemployment, and his fears that monetary expansion would simply lead to higher inflation and a repeat of the bad old days of the 1970s, when inflation and unemployment spun out of control, follow from a theoretical presumption that the US economy is now operating (as it almost always does) in the neighborhood of equilibrium. This does not seem right to me, but it is the sort of deep theoretical assumption (e.g., like the rationality of economic agents) that is not subject to direct empirical testing. It is part of what the philosopher Imre Lakatos called the hard core of a (in this case Williamson’s) scientific research program. Whatever happens, Williamson will process the observed facts in terms of a theoretical paradigm in which prices adjust and markets clear. No other way of viewing reality makes sense, because Williamson cannot make sense of reality except in terms of the theoretical paradigm or world view to which he is committed. I actually have some sympathy with that way of looking at the world, not because I think it’s really true, but because it’s just the best paradigm we have at the moment. But I don’t want to follow that line of thought too far now; who knows, maybe another time.

A good illustration of how Williamson understands his paradigm was provided by blogger J. P. Koning in a comment on my previous post, copying the following quotation from a post that Williamson wrote on his blog a couple of years ago.

In other cases, as in the link you mention, there are people concerned about disequilibrium phenomena. These approaches are or were popular in Europe – I looked up Benassy and he is still hard at work. However, most of the mainstream – and here I’m including New Keynesians – sticks to equilibrium economics. New Keynesian models may have some stuck prices and wages, but those models don’t have to depart much from standard competitive equilibrium (or, if you like, competitive equilibrium with monopolistic competition). In those models, you have to determine what a firm with a stuck price produces, and that is where the big leap is. However, in terms of determining everything mathematically, it’s not a big deal. Equilibrium economics is hard enough as it is, without having to deal with the lack of discipline associated with “disequilibrium.” In equilibrium economics, particularly monetary equilibrium economics, we have all the equilibria (and more) we can handle, thanks.

I actually agree that departing from the assumption of equilibrium can involve a lack of discipline. Market clearing is a very powerful analytical tool, and to give it up without replacing it with an equally powerful analytical tool leaves us theoretically impoverished. But Williamson seems to suggest (or at least leaves open the suggestion) that there is only one kind of equilibrium that can be handled theoretically, namely a fully optimal general equilibrium with perfect foresight (i.e., rational expectations) or at least with a learning process leading toward rational expectations. But there are other equilibrium concepts that preserve market clearing without imposing what seems to me the unreasonable condition of rational expectations and (near) optimality.

In particular, there is the Hicksian concept of a temporary equilibrium (inspired by Hayek’s discussion of intertemporal equilibrium), which allows for inconsistent expectations by economic agents, but assumes market clearing based on supply and demand schedules reflecting those inconsistent expectations. Nearly 40 years ago, Earl Thompson was able to deploy that equilibrium concept to derive a sub-optimal temporary equilibrium with Keynesian unemployment and a role for countercyclical monetary policy in minimizing inefficient unemployment. I have summarized and discussed Thompson’s model in some previous posts (here, here, here, and here), and I hope to do a few more in the future. The model is hardly the last word, but it might at least serve as a starting point for thinking seriously about the possibility that not every state of the economy is an optimal equilibrium state, but without abandoning market clearing as an analytical tool.
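
For readers who want the bare bones of the concept, here is a schematic of a Hicksian temporary equilibrium in general, not of Thompson’s model in particular: at each date t, agent i’s demands and supplies depend on the current price p_t and on that agent’s own expectation p^e_{i,t+1} of next period’s price, and a temporary equilibrium requires only that today’s market clear:

\[ \sum_i d_i\left(p_t, \, p^{e}_{i,t+1}\right) = \sum_j s_j\left(p_t, \, p^{e}_{j,t+1}\right) . \]

Nothing requires the expectations to be mutually consistent, or correct ex post, so the market-clearing outcome can be inefficient. That is the sense in which a temporary equilibrium can exhibit Keynesian unemployment while preserving market clearing as an analytical tool.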

Too Little, Too Late?

The FOMC, after over four years of overly tight monetary policy, seems to be feeling its way toward an easier policy stance. But will it do any good? Unfortunately, there is reason to doubt that it will. The FOMC statement pledges to continue purchasing $85 billion a month of Treasuries and mortgage-backed securities and to keep interest rates at current low levels until the unemployment rate falls below 6.5% or the inflation rate rises above 2.5%. In other words, the Fed is saying that it will tolerate an inflation rate only marginally higher than the current target for inflation before it begins applying the brakes to the expansion. Here is how the New York Times reported on the Fed announcement.

The Federal Reserve said Wednesday it planned to hold short-term interest rates near zero so long as the unemployment rate remains above 6.5 percent, reinforcing its commitment to improve labor market conditions.

The Fed also said that it would continue in the new year its monthly purchases of $85 billion in Treasury bonds and mortgage-backed securities, the second prong of its effort to accelerate economic growth by reducing borrowing costs.

But Fed officials still do not expect the unemployment rate to fall below the new target for at least three more years, according to forecasts also published Wednesday, and they chose not to expand the Fed’s stimulus campaign.

In fairness to the FOMC, the Fed, although technically independent, must operate within an implicit consensus on what kind of decisions it can take, its freedom of action thereby being circumscribed in the absence of a clear signal of support from the administration for a substantial departure from the terms of the implicit consensus. For the Fed to substantially raise its inflation target would risk a political backlash against it, and perhaps precipitate a deep internal split within the Fed’s leadership. At the depth of the financial crisis and in its immediate aftermath, perhaps Chairman Bernanke, if he had been so inclined, might have been able to effect a drastic change in monetary policy, but that window of opportunity closed quickly once the economy stopped contracting and began its painfully slow pseudo recovery.

As I have observed a number of times (here, here, and here), the paradigm for the kind of aggressive monetary easing that is now necessary is FDR’s unilateral decision to take the US off the gold standard in 1933. But FDR was a newly elected President with a massive electoral mandate, and he was making decisions in the midst of the worst economic crisis in modern times. Could an unelected technocrat (or a collection of unelected technocrats) take such actions on his (or their) own? From the get-go, the Obama administration showed no inclination to provide any significant input to the formulation of monetary policy, either out of an excess of scruples about Fed independence or out of a misguided belief that monetary policy was powerless to affect the economy when interest rates were close to zero.

Stephen Williamson, on his blog, consistently gives articulate expression to the doctrine of Fed powerlessness. In a post yesterday, correctly anticipating that the Fed would continue its program of buying mortgage backed securities and Treasuries, and would tie its policy to numerical triggers relating to unemployment, Williamson disdainfully voiced his skepticism that the Fed’s actions would have any positive effect on the real performance of the economy, while registering his doubts that the Fed would be any more successful in preventing inflation from getting out of hand while attempting to reduce unemployment than it was in the 1970s.

It seems to me that Williamson reaches this conclusion based on the following premises. The Fed has little or no control over interest rates or inflation, and the US economy is not far removed from its equilibrium growth path. But Williamson also believes that the Fed might be able to increase inflation, and that that would be a bad thing if the Fed were actually to do so.  The Fed can’t do any good, but it could do harm.

Williamson is fairly explicit in saying that he doubts the ability of positive QE to stimulate, and of negative QE (which, I guess, might be called QT) to dampen, real or nominal economic activity.

Short of a theory of QE – or more generally a serious theory of the term structure of interest rates – no one has a clue what the effects are, if any. Until someone suggests something better, the best guess is that QE is irrelevant. Any effects you think you are seeing are either coming from somewhere else, or have to do with what QE signals for the future policy rate. The good news is that, if it’s irrelevant, it doesn’t do any harm. But if the FOMC thinks it works when it doesn’t, that could be a problem, in that negative QE does not tighten, just as positive QE does not ease.

But Williamson seems a bit uncertain about the effects of “forward guidance,” i.e., the Fed’s commitment to keep interest rates low for an extended period of time, or until a trigger is pulled, e.g., when unemployment falls below a specified level. This is where Williamson sees a real potential for mischief.

(1) To be well-understood, the triggers need to be specified in a very simple form. As such it seems as likely that the Fed will make a policy error if it commits to a trigger as if it commits to a calendar date. The unemployment rate seems as good a variable as any to capture what is going on in the real economy, but as such it’s pretty bad. It’s hardly a sufficient statistic for everything the Fed should be concerned with.

(2) This is a bad precedent to set, for two reasons. First, the Fed should not be setting numerical targets for anything related to the real side of the dual mandate. As is well-known, the effect of monetary policy on real economic activity is transient, and the transmission process poorly understood. It would be foolish to pretend that we know what the level of aggregate economic activity should be, or that the Fed knows how to get there. Second, once you convince people that triggers are a good idea in this “unusual” circumstance, those same people will wonder what makes other circumstances “normal.” Why not just write down a Taylor rule for the Fed, and send the FOMC home? Again, our knowledge of how the economy works, and what future contingencies await us, is so bad that it seems optimal, at least to me, that the Fed make it up as it goes along.

I agree that a fixed trigger is a very blunt instrument, and it is hard to know at what level to set it. In principle, it would be preferable if the trigger were not pulled automatically, but only as a result of some exercise of discretionary judgment on the part of the monetary authority, except that the exercise of discretion may undermine the expectational effect of setting a trigger. Williamson’s second objection strikes me as less persuasive than the first. It is at least misleading, and perhaps flatly wrong, to say that the effect of monetary policy on real economic activity is transient. The standard argument for the ineffectiveness of monetary policy involves an exercise in which the economy starts off in equilibrium. If you take such an economy and apply a monetary stimulus to it, there is a plausible (but not necessarily unexceptionable) argument that the long-run effect of the stimulus will be nil, and any transitory gain in output and employment may be offset (or outweighed) by a subsequent transitory loss. But if the initial position is out of equilibrium, I am unaware of any plausible, let alone compelling, argument that monetary stimulus would not be effective in hastening the adjustment toward equilibrium. In a trivial sense, the effect of monetary policy is transient inasmuch as the economy would eventually reach an equilibrium even without monetary stimulus. However, unlike the case in which monetary stimulus is applied to an economy in equilibrium, applying monetary stimulus to an economy out of equilibrium can produce short-run gains that aren’t wiped out by subsequent losses. I am not sure how to interpret the rest of Williamson’s criticism. One might almost interpret him as saying that he would favor a policy of targeting nominal GDP (which bears a certain family resemblance to the Taylor rule), a policy that would also address some of the other concerns Williamson has about the Fed’s choice of triggers, except that Williamson is already on record in opposition to NGDP targeting.
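
The contrast is easy to see in a toy partial-adjustment model, entirely my own illustration with made-up parameters; it deliberately abstracts from the offsetting losses that might follow stimulus applied at equilibrium, since its point is the out-of-equilibrium case.

# y is the percentage deviation of output from its equilibrium path;
# rho < 1 captures the economy's own (slow) tendency to return to
# equilibrium; m0 is a one-time monetary impulse applied at t = 0.

def output_path(y0, m0, rho=0.8, periods=10):
    y, path = y0, []
    for t in range(periods):
        y = rho * y + (m0 if t == 0 else 0.0)
        path.append(round(y, 2))
    return path

# Stimulus applied AT equilibrium: output is pushed above trend, and
# the gain simply decays away, a purely transient effect.
print(output_path(y0=0.0, m0=2.0))

# Stimulus applied OUT of equilibrium (y0 = -5): the same impulse
# shrinks the output gap at every subsequent date relative to the
# no-stimulus path, so the short-run gains are not wiped out later.
print(output_path(y0=-5.0, m0=2.0))
print(output_path(y0=-5.0, m0=0.0))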

In reply to a comment on this post, Williamson made the following illuminating observation:

Read James Tobin’s paper, “How Dead is Keynes?” referenced in my previous post. He was writing in June 1977. The unemployment rate is 7.2%, the cpi inflation rate is 6.7%, and he’s complaining because he thinks the unemployment rate is disastrously high. He wants more accommodation. Today, I think we understand the reasons that the unemployment rate was high at the time, and we certainly don’t think that monetary policy was too tight in mid-1977, particularly as inflation was about to take off into the double-digit range. Today, I don’t think the labor market conditions we are looking at are the result of sticky price/wage inefficiencies, or any other problem that monetary policy can correct.

The unemployment rate in 1977 was 7.2%, at least half a percentage point lower than the current rate, and the CPI inflation rate was 6.7%, nearly 5 percentage points higher than the current rate. Just because Tobin was overly disposed toward monetary expansion in 1977, when unemployment was lower and inflation higher than they are now, it does not follow that monetary expansion now would be as misguided as it was in 1977. Williamson is convinced that the labor market is now roughly in equilibrium, so that monetary expansion would lead us away from, not toward, equilibrium. Perhaps it would, but most informed observers simply don’t share Williamson’s intuition that the current state of the economy is not that far from equilibrium. Unless you buy that far-from-self-evident premise, the case for monetary expansion is hard to dispute. Nevertheless, despite his current unhappiness, I am not so sure that Williamson will be as upset with the actual policy that the Fed implements as he seems to think he will be. The Fed is moving in the right direction, but it is only taking baby steps.

PS I see that Williamson has now posted his reaction to the Fed’s statement.  Evidently, he is not pleased.  Perhaps I will have something more to say about that tomorrow.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
