Archive for February, 2014

Exposed: Irrational Inflation-Phobia at the Fed Caused the Panic of 2008

Matthew O’Brien at The Atlantic has written a marvelous account of the bizarre deliberations of the Federal Open Market Committee at its meetings (June 24-25 and August 5) before the Lehman debacle on September 15, 2008, and at its meeting the next day, on September 16. A few weeks ago, I wrote in half-seriousness a post attributing the 2008 financial crisis to ethanol because of the runup in corn and other grain prices in 2008 owing to the ethanol mandate and the restrictions on imported ethanol products. But ethanol, as several commenters pointed out, was only a part, probably a relatively small part, of the spike in commodities prices in the summer of 2008. Thanks to O’Brien’s careful reading of the recently released transcripts of the 2008 meetings of the FOMC, we now have a clear picture of how obsessed the FOMC was with inflation, especially the gang of four regional bank presidents, Charles Plosser, Richard Fisher, Jeffrey Lacker, and Thomas Hoenig, supported to a greater or lesser extent by James Bullard and Kevin Warsh.

On the other hand, O’Brien does point out that two members of the FOMC, Eric Rosengren, President of the Boston Fed, and Frederic Mishkin of the Board of Governors, consistently warned of the dangers of a financial crisis, and consistently objected to and cogently punctured the hysterical inflation fears of the gang of four. It is somewhat, but only somewhat, reassuring that Janet Yellen was slightly more sensitive to the dangers of a financial crisis and less concerned about inflation than Ben Bernanke. Perhaps because he was still getting his feet wet as chairman, Bernanke seems to have been trying to articulate a position that could balance the opposing concerns of the FOMC membership, rather than leading the FOMC in the direction he thought best. While Yellen did not indulge the inflation phobia of the gang of four, she did not strongly support Rosengren and Mishkin in calling for aggressive action to avert the crisis that they clearly saw looming on the horizon.

Here are some highlights from O’Brien’s brilliant piece:

[FOMC Meeting] June 24-25, 2008: 468 mentions of inflation, 44 of unemployment, and 35 of systemic risks/crises

Those numbers pretty much tell you everything you need to know about what happened during the disastrous summer of 2008 at the Fed.

Rosengren wasn’t nearly as concerned with 5 percent headline inflation—and with good reason. He reminded his colleagues that “monetary policy is unlikely to have much effect on food and energy prices,” that “total [inflation] has tended to converge to core, and not the opposite,” and that there was a “lack of an upward trend of wages and salaries.”

In short, inflation was high today, but it wouldn’t be tomorrow. They should ignore it. A few agreed. Most didn’t.

Mishkin, Fed Governor Donald Kohn, and then-San Francisco Fed chief Janet Yellen comprised Team: Ignore Inflation. They pointed out that core inflation hadn’t actually risen, and that “inflation expectations remain reasonably well-anchored.” The rest of the Fed, though, was eager to raise rates soon, if not right away. Philadelphia Fed president Charles Plosser recognized that core inflation was flat, but still thought they needed to get ready to tighten “or our credibility could soon vanish.” Fed Governor Kevin Warsh said that “inflation risks, in my view, continue to predominate as the greater risk to the economy,” because he thought headline would get passed into core inflation.

And let us not forget Richard Fisher of the Dallas Fed who provided badly needed comic relief.

And then there was Dallas Fed chief Richard Fisher, who had a singular talent for seeing inflation that nobody else could—a sixth sense, if you will. He was allergic to data. He preferred talking to CEOs instead. But, in Fisher’s case, the plural of anecdote wasn’t data. It was nonsense. He was worried about Frito-Lays increasing prices 9 percent, Budweiser increasing them 3.5 percent, and a small dry-cleaning chain in Dallas increasing them, well, an undisclosed amount. He even half-joked that the Fed was giving out smaller bottles of water, presumably to hide creeping inflation?

By the way, I notice that these little bottles of water have gotten smaller—this will be a Visine bottle at the next meeting. [Laughter]

But it was another member of the Gang of Four who warned ominously:

Richmond Fed president Jeffrey Lacker suggested that “at some point we’re going to choose to let something disruptive happen.”

Now to the August meeting:

[FOMC Meeting] August 5, 2008: 322 mentions of inflation, 28 of unemployment, and 19 of systemic risks/crises.

Despite evidence that the inflationary blip of spring and summer was winding down, and the real economy was weakening, the Gang of Four continued to press their case for tougher anti-inflation measures. But only Rosengren and Mishkin spoke out against them.

But even though inflation was falling, it was a lonesome time to be a dove. As the Fed’s resident Cassandra, Rosengren tried to convince his colleagues that high headline inflation numbers “appear to be transitory responses to supply shocks that are not flowing through to labor markets.” In other words, inflation would come down on its own, and the Fed should focus on the credit crunch instead. Mishkin worried that “really bad things could happen” if “a shoe drops” and there was a “nasty, vicious spiral” between weak banks and a weak economy. Given this, he wanted to wait to tighten until inflation expectations “actually indicate there is a problem,” and not before.

But Richard Fisher was in no mood to worry about horror stories unless they were about runaway inflation:

The hawks didn’t want to wait. Lacker admitted that wages hadn’t gone up, but thought that “if we wait until wage rates accelerate or TIPS measures spike, we will have waited too long.” He wanted the Fed to “be prepared to raise rates even if growth is not back to potential, and even if financial markets are not yet tranquil.” In other words, to fight nonexistent wage inflation today to prevent possible wage inflation tomorrow, never mind the crumbling economy. Warsh, for his part, kept insisting that “inflation risks are very real, and I believe that these are higher than growth risks.” And Fisher had more “chilling anecdotes”—as Bernanke jokingly called them—about inflation. This time, the culprit was Disney World and its 5 percent price increase for single-day tickets.

The FOMC was divided, but the inflation-phobes held the upper hand. Unwilling to challenge them, Bernanke appeased them by promising that his statement about future monetary policy after the meeting would “be slightly hawkish—to indicate a slight uplift in policy.”

Frightened by what he was hearing, Mishkin reminded his colleagues of some unpleasant monetary history:

Remember that in the Great Depression, when—I can’t use the expression because it would be in the transcripts, but you know what I’m thinking—something hit the fan, [laughter] it actually occurred close to a year after the initial negative shock.

Mishkin also reminded his colleagues that the stance of monetary policy cannot be directly inferred from the federal funds rate.

I just very much hope that this Committee does not make this mistake because I have to tell you that the situation is scary to me. I’m holding two houses right now. I’m very nervous.

And now to the September meeting, the day after Lehman collapsed:

[FOMC meeting] September 16, 2008: 129 mentions of inflation, 26 of unemployment, and 4 of systemic risks/crises

Chillingly, Lacker and Hoenig did a kind of victory dance about the collapse of Lehman Brothers.

Lacker had gotten the “disruptive” event he had wanted, and he was pretty pleased about it. “What we did with Lehman I obviously think was good,” he said, because it would “enhance the credibility of any commitment that we make in the future to be willing to let an institution fail.” Hoenig concurred that it was the “right thing,” because it would suck moral hazard out of the market.

The rest of the Gang of Four and their allies remained focused like a laser on inflation.

Even though commodity prices and inflation expectations were both falling fast, Hoenig wanted the Fed to “look beyond the immediate crisis,” and recognize that “we also have an inflation issue.” Bullard thought that “an inflation problem is brewing.” Plosser was heartened by falling commodity prices, but said, “I remain concerned about the inflation outlook going forward,” because “I do not see the ongoing slowdown in economic activity as entirely demand driven.” And Fisher half-jokingly complained that the bakery he’d been going to for 30 years—”the best maker of not only bagels, but anything with Crisco in it”—had just increased prices. All of them wanted to leave rates unchanged at 2 percent.

Again, only Eric Rosengren seemed to be in touch with reality, but no one was listening:

[Rosengren] was afraid that exactly what did end up happening would happen. That all the financial chaos “would have a significant impact on the real economy,” that “individuals and firms will become risk averse, with reluctance to consume or invest,” that “credit spreads are rising, and the cost and availability of financing is becoming more difficult,” and that “deleveraging is likely to occur with a vengeance.” More than that, he thought that the “calculated bet” they took in letting Lehman fail would look particularly bad “if we have a run on the money market funds or if the nongovernment tri-party repo market shuts down.” He wanted to cut rates immediately to do what they could to offset the worsening credit crunch. Nobody else did.

Like Bernanke, for instance. Here is his take on the situation:

Overall I believe that our current funds rate setting is appropriate, and I don’t really see any reason to change…. Cutting rates would be a very big step that would send a very strong signal about our views on the economy and about our intentions going forward, and I think we should view that step as a very discrete thing rather than as a 25 basis point kind of thing. We should be very certain about that change before we undertake it because I would be concerned, for example, about the implications for the dollar, commodity prices, and the like.

OMG!

O’Brien uses one of my favorite Hawtrey quotes to describe the insanity of the FOMC deliberations:

In other words, the Fed was just as worried about an inflation scare that was already passing as it was about a once-in-three-generations crisis.

It brought to mind what economist R. G. Hawtrey had said about the Great Depression. Back then, central bankers had worried more about the possibility of inflation than the grim reality of deflation. It was, Hawtrey said, like “crying Fire! Fire! in Noah’s flood.”

In any non-dysfunctional institution, the perpetrators of this outrage would have been sacked. But three of the Gang of Four (Hoenig having become a director of the FDIC in 2012) remain safely ensconced in their exalted positions, blithely continuing, without the slightest acknowledgment of their catastrophic past misjudgments, to exert a malign influence on monetary policy. For shame!

Methodological Arrogance

A few weeks ago, I posted a somewhat critical review of Kartik Athreya’s new book Big Ideas in Macroeconomics. In quoting a passage from chapter 4 in which Kartik defended the rational-expectations axiom on the grounds that it protects the public from economists who, if left unconstrained by the discipline of rational expectations, could use expectational assumptions to generate whatever results they wanted, I suggested that this sort of reasoning in defense of the rational-expectations axiom betrayed what I called the “methodological arrogance” of modern macroeconomics which has, to a large extent, succeeded in imposing that axiom on all macroeconomic models. In his comment responding to my criticisms, Kartik made good-natured reference in passing to my charge of “methodological arrogance,” without substantively engaging with the charge. And in a post about the early reviews of Kartik’s book, Steve Williamson, while crediting me for at least reading the book before commenting on it, registered puzzlement at what I meant by “methodological arrogance.”

Actually, I realized when writing that post that I was not being entirely clear about what “methodological arrogance” meant, but I thought that my somewhat tongue-in-cheek reference to the duty of modern macroeconomists “to ban such models from polite discourse — certainly from the leading economics journals — lest the public be tainted by economists who might otherwise dare to abuse their models by making illicit assumptions about expectations formation and equilibrium concepts” was sufficiently suggestive not to require elaboration, especially after having devoted several earlier posts to criticisms of the methodology of modern macroeconomics (e.g., here, here, and here). That was a misjudgment.

So let me try to explain what I mean by methodological arrogance, which is not quite the same as, but is closely related to, methodological authoritarianism. I will do so by referring to the long introductory essay (“A Realist View of Logic, Physics, and History”) that Karl Popper contributed to the book The Self and Its Brain, co-authored with the neuroscientist John Eccles. The chief aim of the essay was to argue that the universe is not fully determined, but evolves, producing new, emergent phenomena not originally extant in the universe, such as the higher elements, life, consciousness, language, science, and all other products of human creativity, which in turn interact with the universe in fundamentally unpredictable ways. Popper regards consciousness as a real phenomenon that cannot be reduced to or explained by purely physical causes. Though he makes only brief passing reference to the social sciences, Popper’s criticisms of reductionism are directly applicable to the microfoundations program of modern macroeconomics, and so I think it will be useful to quote what he wrote at some length.

Against the acceptance of the view of emergent evolution there is a strong intuitive prejudice. It is the intuition that, if the universe consists of atoms or elementary particles, so that all things are structures of such particles, then every event in the universe ought to be explicable, and in principle predictable, in terms of particle structure and of particle interaction.

Notice how easy it would be to rephrase this statement as a statement about microfoundations:

Against the acceptance of the view that there are macroeconomic phenomena, there is a strong intuitive prejudice. It is the intuition that, if the macroeconomy consists of independent agents, so that all macroeconomic phenomena are the result of decisions made by independent agents, then every macroeconomic event ought to be explicable, and in principle predictable, in terms of the decisions of individual agents and their interactions.

Popper continues:

Thus we are led to what has been called the programme of reductionism [microfoundations]. In order to discuss it I shall make use of the following Table

(12) Level of ecosystems

(11) Level of populations of metazoan and plants

(10) Level of metazoa and multicellular plants

(9) Level of tissues and organs (and of sponges?)

(8) Level of populations of unicellular organisms

(7) Level of cells and of unicellular organisms

(6) Level of organelles (and perhaps of viruses)

(5) Liquids and solids (crystals)

(4) Molecules

(3) Atoms

(2) Elementary particles

(1) Sub-elementary particles

(0) Unknown sub-sub-elementary particles?

The reductionist idea behind this table is that the events or things on each level should be explained in terms of the lower levels. . . .

This reductionist idea is interesting and important; and whenever we can explain entities and events on a higher level by those of a lower level, we can speak of a great scientific success, and can say that we have added much to our understanding of the higher level. As a research programme, reductionism is not only important, but it is part of the programme of science whose aim is to explain and to understand.

So far so good. Reductionism certainly has its place. So do microfoundations. Whenever we can take an observation and explain it in terms of its constituent elements, we have accomplished something important. We have made scientific progress.

But Popper goes on to voice a cautionary note. There may be, and probably are, strict, perhaps insuperable, limits to how far higher-level phenomena can be reduced to (explained by) lower-level phenomena.

[E]ven the often referred to reduction of chemistry to physics, important as it is, is far from complete, and very possibly incompletable. . . . [W]e are far removed indeed from being able to claim that all, or most, properties of chemical compounds can be reduced to atomic theory. . . . In fact, the five lower levels of [our] Table . . . can be used to show that we have reason to regard this kind of intuitive reduction programme as clashing with some results of modern physics.

For what [our] Table suggests may be characterized as the principle of “upward causation.” This is the principle that causation can be traced in our Table . . . . from a lower level to a higher level, but not vice versa; that what happens on a higher level can be explained in terms of the next lower level, and ultimately in terms of elementary particles and the relevant physical laws. It appears at first that the higher levels cannot act on the lower ones.

But the idea of particle-to-particle or atom-to-atom interaction has been superseded by physics itself. A diffraction grating or a crystal (belonging to level (5) of our Table . . .) is a spatially very extended complex (and periodic) structure of billions of molecules; but it interacts as a whole extended periodic structure with the photons or the particles of a beam of photons or particles. Thus we have here an important example of “downward causation“. . . . That is to say, the whole, the macro structure, may, qua whole, act upon a photon or an elementary particle or an atom. . . .

Other physical examples of downward causation – of macroscopic structures on level (5) acting upon elementary particles or photons on level (1) – are lasers, masers, and holograms. And there are also many other macro structures which are examples of downward causation: every simple arrangement of negative feedback, such as a steam engine governor, is a macroscopic structure that regulates lower level events, such as the flow of the molecules that constitute the steam. Downward causation is of course important in all tools and machines which are designed for some purpose. When we use a wedge, for example, we do not arrange for the action of its elementary particles, but we use a structure, relying on it to guide the actions of its constituent elementary particles to act, in concert, so as to achieve the desired result.

Stars are undesigned, but one may look at them as undesigned “machines” for putting the atoms and elementary particles in their central region under terrific gravitational pressure, with the (undesigned) result that some atomic nuclei fuse and form the nuclei of heavier elements; an excellent example of downward causation, of the action of the whole structure upon its constituent particles.

(Stars, incidentally, are good examples of the general rule that things are processes. Also, they illustrate the mistake of distinguishing between “wholes” – which are “more than the sums of their parts” – and “mere heaps”: a star is, in a sense, a “mere” accumulation, a “mere heap” of its constituent atoms. Yet it is a process – a dynamic structure. Its stability depends upon the dynamic equilibrium between its gravitational pressure, due to its sheer bulk, and the repulsive forces between its closely packed elementary particles. If the latter are excessive, the star explodes. If they are smaller than the gravitational pressure, it collapses into a “black hole.”)

The most interesting examples of downward causation are to be found in organisms and in their ecological systems, and in societies of organisms [my emphasis]. A society may continue to function even though many of its members die; but a strike in an essential industry, such as the supply of electricity, may cause great suffering to many individual people. . . . I believe that these examples make the existence of downward causation obvious; and they make the complete success of any reductionist programme at least problematic.

I was very glad when I recently found this discussion of reductionism by Popper in a book that I had not opened for maybe 40 years, because it supports an argument that I have been making on this blog against the microfoundations program in macroeconomics: that as much as macroeconomics requires microfoundations, microeconomics also requires macrofoundations. Here is how I put it a little over a year ago:

In fact, the standard comparative-statics propositions of microeconomics are also based on the assumption of the existence of a unique stable general equilibrium. Those comparative-statics propositions about the signs of the derivatives of various endogenous variables (price, quantity demanded, quantity supplied, etc.) with respect to various parameters of a microeconomic model involve comparisons between equilibrium values of the relevant variables before and after the posited parametric changes. All such comparative-statics results involve a ceteris-paribus assumption, conditional on the existence of a unique stable general equilibrium which serves as the starting and ending point (after adjustment to the parameter change) of the exercise, thereby isolating the purely hypothetical effect of a parameter change. Thus, as much as macroeconomics may require microfoundations, microeconomics is no less in need of macrofoundations, i.e., the existence of a unique stable general equilibrium, absent which a comparative-statics exercise would be meaningless, because the ceteris-paribus assumption could not otherwise be maintained. To assert that macroeconomics is impossible without microfoundations is therefore to reason in a circle, the empirically relevant propositions of microeconomics being predicated on the existence of a unique stable general equilibrium. But it is precisely the putative failure of a unique stable intertemporal general equilibrium to be attained, or to serve as a powerful attractor to economic variables, that provides the rationale for the existence of a field called macroeconomics.
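To make the comparative-statics point concrete, here is the standard single-market exercise in notation of my own choosing (an illustrative sketch, not a quotation from the original post). Equilibrium is defined by zero excess demand, and the effect of a parameter change follows from total differentiation:

```latex
% Demand D(p, \alpha), supply S(p); equilibrium is zero excess demand:
E(p^*, \alpha) \;\equiv\; D(p^*, \alpha) - S(p^*) \;=\; 0
\qquad\Longrightarrow\qquad
\frac{dp^*}{d\alpha} \;=\; -\,\frac{D_\alpha}{D_p - S_p}\,.
```

The sign of the derivative is pinned down only by the stability condition D_p − S_p < 0, and the ceteris-paribus clause, which holds every other price fixed, is legitimate only if the rest of the economy sits at, and returns to, a unique stable general equilibrium.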

And more recently, I put it this way:

The microeconomic theory of price adjustment is a theory of price adjustment in a single market. It is a theory in which, implicitly, all prices and quantities, except for a single price-quantity pair, are in equilibrium. Equilibrium in that single market is rapidly restored by price and quantity adjustment in that single market. That is why I have said that microeconomics rests on a macroeconomic foundation, and that is why it is illusory to imagine that macroeconomics can be logically derived from microfoundations. Microfoundations, insofar as they explain how prices adjust, are themselves founded on the existence of a macroeconomic equilibrium. Founding macroeconomics on microfoundations is just a form of bootstrapping.

So I think that my criticism of the microfoundations project exactly captures the gist of Popper’s criticism of reductionism. Popper extended his criticism to a certain form of reductionism, which he called “radical materialism or radical physicalism,” in a later passage of the same essay, which is also worth quoting:

Radical materialism or radical physicalism is certainly a self-consistent position. For it is a view of the universe which, as far as we know, was adequate once; that is, before the emergence of life and consciousness. . . .

What speaks in favour of radical materialism or radical physicalism is, of course, that it offers us a simple vision of a simple universe, and this looks attractive just because, in science, we search for simple theories. However, I think that it is important that we note that there are two different ways by which we can search for simplicity. They may be called, briefly, philosophical reduction and scientific reduction. The latter is characterized by an attempt to provide bold and testable theories of high explanatory power. I believe that the latter is an extremely valuable and worthwhile method; while the former is of value only if we have good reasons to assume that it corresponds to the facts about the universe.

Indeed, the demand for simplicity in the sense of philosophical rather than scientific reduction may actually be damaging. For even in order to attempt scientific reduction, it is necessary for us to get a full grasp of the problem to be solved, and it is therefore vitally important that interesting problems are not “explained away” by philosophical analysis. If, say, more than one factor is responsible for some effect, it is important that we do not pre-empt the scientific judgment: there is always the danger that we might refuse to admit any ideas other than the ones we appear to have at hand: explaining away, or belittling the problem. The danger is increased if we try to settle the matter in advance by philosophical reduction. Philosophical reduction also makes us blind to the significance of scientific reduction.

Popper adds the following footnote about the difference between philosophic and scientific reduction.

Consider, for example, what a dogmatic philosophical reductionist of a mechanistic disposition (or even a quantum-mechanistic disposition) might have done in the face of the problem of the chemical bond. The actual reduction, so far as it goes, of the theory of the hydrogen bond to quantum mechanics is far more interesting than the philosophical assertion that such a reduction will one day be achieved.

What modern macroeconomics now offers is largely an array of models simplified sufficiently so that they are solvable using the techniques of dynamic optimization. Dynamic optimization by individual agents — the microfoundations of modern macro — makes sense only in the context of an intertemporal equilibrium. But it is just the possibility that intertemporal equilibrium may not obtain that, to some of us at least, makes macroeconomics interesting and relevant. As the great Cambridge economist, Frederick Lavington, anticipating Popper in grasping the possibility of downward causation, put it so well, “the inactivity of all is the cause of the inactivity of each.”

So what do I mean by methodological arrogance? I mean an attitude that invokes microfoundations as a methodological principle — philosophical reductionism in Popper’s terminology — while dismissing non-microfounded macromodels as unscientific. To be sure, the progress of science may enable us to reformulate (and perhaps improve) explanations of certain higher-level phenomena by expressing those relationships in terms of lower-level concepts. That is what Popper calls scientific reduction. But scientific reduction is very different from rejecting, on methodological principle, any explanation not expressed in terms of more basic concepts.

And whenever macrotheory seems inconsistent with microtheory, the inconsistency poses a problem to be solved. Solving the problem will advance our understanding. But simply to reject the macrotheory on methodological principle without evidence that the microfounded theory gives a better explanation of the observed phenomena than the non-microfounded macrotheory (and especially when the evidence strongly indicates the opposite) is arrogant. Microfoundations for macroeconomics should result from progress in economic theory, not from a dubious methodological precept.

Let me quote Popper again (this time from his book Objective Knowledge) about the difference between scientific and philosophical reduction, addressing the denial by physicalists that there is such a thing as consciousness, a denial based on their belief that all supposedly mental phenomena can and will ultimately be reduced to purely physical phenomena:

[P]hilosophical speculations of a materialistic or physicalistic character are very interesting, and may even be able to point the way to a successful scientific reduction. But they should be frankly tentative theories. . . . Some physicalists do not, however, consider their theories as tentative, but as proposals to express everything in physicalist language; and they think these proposals have much in their favour because they are undoubtedly convenient: inconvenient problems such as the body-mind problem do indeed, most conveniently, disappear. So these physicalists think that there can be no doubt that these problems should be eliminated as pseudo-problems. (p. 293)

One could easily substitute “methodological speculations about macroeconomics” for “philosophical speculations of a materialistic or physicalistic character” in the first sentence. And in the third sentence one could substitute “advocates of microfounding all macroeconomic theories” for “physicalists,” “microeconomic” for “physicalist,” and “Phillips Curve” or “involuntary unemployment” for “body-mind problem.”

So, yes, I think it is arrogant to think that you can settle an argument by forcing the other side to use only those terms that you approve of.

Who’s Afraid of Say’s Law?

There’s been a lot of discussion about Say’s Law in the blogosphere lately, some of it finding its way into the comments section of my recent post “What Does ‘Keynesian’ Mean?,” in which I made passing reference to Keynes’s misdirected tirade against Say’s Law in the General Theory. Keynes wasn’t the first economist to make a fuss over Say’s Law. It was a big deal in the nineteenth century when Say advanced what was then called the Law of the Markets, pointing out that the object of all production is, in the end, consumption, so that all productive activity ultimately constitutes a demand for other products. There were extended debates about whether Say’s Law was really true, with Say, Ricardo, and James and John Stuart Mill all weighing in in favor of the Law, and Malthus and the French economist J. C. L. de Sismondi arguing against it. A bit later, Karl Marx also wrote at length about Say’s Law, heaping his ample supply of scorn upon Say and his Law. Thomas Sowell’s first book, I believe drawn from the doctoral dissertation he wrote under George Stigler, was about the classical debates over Say’s Law.

The literature about Say’s Law is too vast to summarize in a blog post. Here’s my own selective take on it.

Say was trying to refute a certain kind of explanation of economic crises, and of what we now would call cyclical or involuntary unemployment, an explanation attributing such unemployment to excess production for which income earners don’t have enough purchasing power in their pockets to buy. Say responded that the reason why income earners had supplied the services necessary to produce the available output was to earn enough income to purchase the output. This is the basic insight behind the famous paraphrase (I don’t know if it was Keynes’s paraphrase or someone else’s) of Say’s Law — supply creates its own demand. If it were instead stated as “products or services are supplied only because the suppliers want to buy other products or services,” I think it would be more in sync with Say’s intent than the standard formulation. Another way to think about Say’s Law is as a kind of conservation law.

There were two famous objections made to Say’s Law: first, current supply might be offered in order to save for future consumption, and, second, current supply might be offered in order to add to holdings of cash. In either case, there could be current supply that is not matched by current demand for output, so that total current demand would be insufficient to generate full employment. Both these objections are associated with Keynes, but he wasn’t the first to make either of them. The savings argument goes back to the nineteenth century, and the typical response was that if there was insufficient current demand, because the desire to save had increased, the public deciding to reduce current expenditures on consumption, the shortfall in consumption demand would lead to an increase in investment demand driven by falling interest rates and rising asset prices. In the General Theory, Keynes proposed an argument about liquidity preference and a potential liquidity trap, suggesting a reason why the necessary adjustment in the rate of interest would not necessarily occur.

Keynes’s argument about a liquidity trap was and remains controversial, but the argument that the existence of money implies that Say’s Law can be violated was widely accepted. Indeed, in his early works on business-cycle theory, F. A. Hayek made the point, seemingly without embarrassment or feeling any need to justify it at length, that the existence of money implied a disconnect between overall supply and overall demand, describing money as a kind of loose joint in the economic system. This argument, apparently viewed as so trivial or commonplace by Hayek that he didn’t bother proving it or citing authority for it, was eventually formalized by the famous market-socialist economist Oskar Lange (who, for a number of years, was a tenured professor at that famous bastion of left-wing economics, the University of Chicago), who introduced a distinction between Walras’s Law and Say’s Law (“Say’s Law: A Restatement and Criticism”).

Walras’s Law says that the sum of all excess demands and excess supplies, evaluated at any given price vector, must identically equal zero. The existence of a budget constraint makes this true for each individual, and so, by the laws of arithmetic, it must be true for the entire economy. Essentially, this was a formalization of the logic of Say’s Law. However, Lange showed that Walras’s Law reduces to Say’s Law only in an economy without money. In an economy with money, Walras’s Law means that there could be an aggregate excess supply of all goods at some price vector, and the excess supply of goods would be matched by an equal excess demand for money. Aggregate demand would be deficient, and the result would be involuntary unemployment. Thus, according to Lange’s analysis, Say’s Law holds, as a matter of necessity, only in a barter economy. But in an economy with money, an excess supply of all real commodities was a logical possibility, which means that there could be a role for some type — the choice is yours — of stabilization policy to ensure that aggregate demand is sufficient to generate full employment. One of my regular commenters, Tom Brown, asked me recently whether I agreed with Nick Rowe’s statement: “the goal of good monetary policy is to try to make Say’s Law true.” I said that I wasn’t sure what the statement meant, thereby avoiding the need to go into a lengthy explanation about why I am not quite satisfied with that way of describing the goal of monetary policy.
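In symbols (my own notation, summarizing Lange’s distinction rather than quoting him), the two laws differ by exactly one term:

```latex
% Walras's Law: at any price vector, the money value of all excess demands,
% including the excess demand for money, sums identically to zero:
\sum_{i=1}^{n} p_i\,(D_i - S_i) \;+\; (D_m - S_m) \;\equiv\; 0 .
% Say's Law, in Lange's sense, asserts the identity over goods alone:
\sum_{i=1}^{n} p_i\,(D_i - S_i) \;\equiv\; 0 .
% With money, Walras's Law therefore permits an excess supply of every good,
% matched by an equal excess demand for money:
\sum_{i=1}^{n} p_i\,(D_i - S_i) \;=\; -\,(D_m - S_m) \;<\; 0
\quad \text{when } D_m > S_m .
```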

There are at least two problems with Lange’s formulation of Say’s Law. The first was pointed out by Clower and Leijonhufvud in their wonderful paper (“Say’s Principle: What It Means and Doesn’t Mean” reprinted here and here) on what they called Say’s Principle in which they accepted Lange’s definition of Say’s Law, while introducing the alternative concept of Say’s Principle as the supply-side analogue of the Keynesian multiplier. The key point was to note that Lange’s analysis was based on the absence of trading at disequilibrium prices. If there is no trading at disequilibrium prices, because the Walrasian auctioneer or clearinghouse only processes information in a trial-and-error exercise aimed at discovering the equilibrium price vector, no trades being executed until the equilibrium price vector has been discovered (a discovery which, even if an equilibrium price vector exists, may not be made under any price-adjustment rule adopted by the auctioneer, rational expectations being required to “guarantee” that an equilibrium price vector is actually arrived at, sans auctioneer), then, indeed, Say’s Law need not obtain in notional disequilibrium states (corresponding to trial price vectors announced by the Walrasian auctioneer or clearinghouse). The insight of Clower and Leijonhufvud was that in a real-time economy in which trading is routinely executed at disequilibrium prices, transactors may be unable to execute the trades that they planned to execute at the prevailing prices. But when planned trades cannot be executed, trading and output contract, because the volume of trade is constrained by the lesser of the amount supplied and the amount demanded.

This is where Say’s Principle kicks in: if transactors do not succeed in supplying as much as they planned to supply at prevailing prices, then, depending on the condition of their balance sheets, and the condition of credit markets, transactors may have to curtail their demands in subsequent periods; a failure to supply as much as had been planned last period will tend to reduce demand in this period. If the “distance” from equilibrium is large enough, the demand failure may even be amplified in subsequent periods, rather than damped. Thus, Clower and Leijonhufvud showed that the Keynesian multiplier was, at a deep level, really just another way of expressing the insight embodied in Say’s Law (or Say’s Principle, if you insist on distinguishing what Say meant from Lange’s reformulation of it in terms of Walrasian equilibrium).
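To see how a failure to sell can cumulate into a multiplier-like contraction, here is a minimal numerical sketch. The short-side trading rule is the Clower-Leijonhufvud constraint described above; the linear spending rule and all parameter values are illustrative assumptions of my own, not anything in their paper:

```python
# Trading at disequilibrium prices: realized sales are the short side of the
# market, and income from realized sales finances the next period's demand.

def simulate(periods=8, planned_supply=100.0, autonomous_demand=20.0, mpc=0.75):
    income = planned_supply                        # start from full-employment income
    path = []
    for t in range(periods):
        demand = autonomous_demand + mpc * income  # spending out of realized income
        traded = min(planned_supply, demand)       # short-side rule: Say's Principle
        income = traded                            # failed sales cut next period's income
        path.append((t, demand, traded))
    return path

for t, demand, traded in simulate():
    print(f"period {t}: demand = {demand:.1f}, realized output = {traded:.1f}")
```

Each period’s shortfall in sales reduces the next period’s spending, and realized output converges to autonomous_demand / (1 − mpc) = 80, the textbook multiplier result arrived at from the supply side.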

I should add that, as I have mentioned in an earlier post, W. H. Hutt, in a remarkable little book, clarified and elaborated on the Clower-Leijonhufvud analysis, explaining how Say’s Principle was really implicit in many earlier treatments of business-cycle phenomena. The only reservation I have about Hutt’s book is that he used it to wage an unnecessary polemical battle against Keynes.

At about the same time that Clower and Leijonhufvud were expounding their enlarged view of the meaning and significance of Say’s Law, Earl Thompson showed that under “classical” conditions, i.e., a competitive supply of privately produced bank money (notes and deposits) convertible into gold, Say’s Law, in Lange’s narrow sense, could also be derived in a straightforward fashion. The demonstration followed from the insight that when bank money is competitively issued, its issue is accomplished by an exchange of assets and liabilities between the bank and the bank’s customer. In contrast to the naïve assumption of Lange (adopted as well by his student Don Patinkin in a number of important articles and a classic treatise) that there is just one market in the monetary sector, there are really two markets in the monetary sector: a market for money supplied by banks and a market for money-backing assets. Thus, any excess demand for money would be offset not, as in the Lange schema, by an excess supply of goods, but by an excess supply of money-backing services. In other words, the public can increase their holdings of cash by giving their IOUs to banks in exchange for the IOUs of the banks, the difference being that the IOUs of the banks are money and the IOUs of customers are not money, but do provide backing for the money created by banks. The market is equilibrated by adjustments in the quantity of bank money and the interest paid on bank money, with no spillover on the real sector. With no spillover from the monetary sector onto the real sector, Say’s Law holds by necessity, just as it would in a barter economy.
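A stylized accounting sketch (my own illustration, not Thompson’s formal model) may help to show why the adjustment bypasses the goods market entirely:

```python
# An excess demand for money is satisfied by an exchange of IOUs with the
# bank: both balance sheets expand, but no goods are bought or sold.

bank = {"loans (customers' IOUs)": 100.0, "deposits (bank's IOUs)": 100.0}
household = {"deposits": 100.0, "debt to bank": 100.0, "planned goods spending": 50.0}

def satisfy_excess_money_demand(amount: float) -> None:
    """The household gives the bank its IOU and receives the bank's IOU
    (a deposit); the bank's IOU is money, the household's is not."""
    bank["loans (customers' IOUs)"] += amount
    bank["deposits (bank's IOUs)"] += amount
    household["deposits"] += amount
    household["debt to bank"] += amount
    # "planned goods spending" is untouched: no spillover onto the real
    # sector, so Say's Law holds just as it would in a barter economy.

satisfy_excess_money_demand(10.0)
print(bank)
print(household)
```

This sketch shows only the quantity side of the adjustment; in Thompson’s account the interest paid on bank money adjusts as well.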

A full exposition can be found in Thompson’s original article. I summarized and restated its analysis of Say’s Law in my 1985 article on classical monetary theory and in my book Free Banking and Monetary Reform. Regrettably, I did not incorporate the analysis of Clower and Leijonhufvud and Hutt into my discussion of Say’s Law either in my article or in my book. But in a world of temporary equilibrium, in which future prices are not correctly foreseen by all transactors, there are no strict intertemporal budget constraints that force excess demands and excess supplies to add up to zero. In short, in such a world, things can get really messy, which is where the Clower-Leijonhufvud-Hutt analysis can be really helpful in sorting things out.

What Does “Keynesian” Mean?

Last week Simon Wren-Lewis wrote a really interesting post on his blog trying to find the right labels with which to identify macroeconomists. Simon, rather disarmingly, starts by admitting the ultimate futility of assigning people labels; reality is just too complicated to conform to the labels that we invent to help ourselves make sense of reality. A good label can provide us with a handle with which to gain a better grasp on a messy set of observations, but it is not the reality. And if you come up with one label, I may counter with a different one. Who’s to say which label is better?

At any rate, as I read through Simon’s post I found myself alternately nodding my head in agreement and shaking my head in disagreement. So staying in the spirit of fun in which Simon wrote his post, I will provide a commentary on his labels and other pronouncements. If the comments are weighted on the side of disagreement, well, that’s what makes blogging fun, n’est-ce pas?

Simon divides academic researchers into two groups (mainstream and heterodox) and macroeconomic policy into two approaches (Keynesian and anti-Keynesian). He then offers the following comment on the meaning of the label Keynesian.

Just think about the label Keynesian. Any sensible definition would involve the words sticky prices and aggregate demand. Yet there are still some economists (generally not academics) who think Keynesian means believing fiscal rather than monetary policy should be used to stabilise demand. Fifty years ago maybe, but no longer. Even worse are non-economists who think being a Keynesian means believing in market imperfections, government intervention in general and a mixed economy. (If you do not believe this happens, look at the definition in Wikipedia.)

Well, as I pointed out in a recent post, there is nothing peculiarly Keynesian about the assumption of sticky prices, especially not as a necessary condition for an output gap and involuntary unemployment. So Simon is going to have to work harder to justify his distinction between Keynesian and anti-Keynesian. In a comment on Simon’s blog, Nick Rowe pointed out just this problem, asking in particular why Simon could not substitute a Monetarist/anti-Monetarist dichotomy for the Keynesian/anti-Keynesian one.

The story gets more complicated in Simon’s next paragraph in which he describes his dichotomy of academic research into mainstream and heterodox.

Thanks to the microfoundations revolution in macro, mainstream macroeconomists speak the same language. I can go to a seminar that involves an RBC model with flexible prices and no involuntary unemployment and still contribute and possibly learn something. Equally an economist like John Cochrane can and does engage in meaningful discussions of New Keynesian theory (pdf).

In other words, the range of acceptable macroeconomic models has been drastically narrowed. Unless it is microfounded in a dynamic stochastic general equilibrium model, a model does not qualify as “mainstream.” This notion of microfoundation is certainly not what Edmund Phelps meant by “microeconomic foundations” when he edited his famous volume Microeconomic Foundations of Employment and Inflation Theory, which contained, among others, Alchian’s classic paper on search costs and unemployment and a paper by the then not so well-known Robert Lucas and his early collaborator Leonard Rapping. Nevertheless, in the current consensus, it is apparently the New Classicals that determine what kind of model is acceptable, while New Keynesians are allowed to make whatever adjustments, mainly sticky wages, they need to derive Keynesian policy recommendations. Anyone who doesn’t go along with this bargain is excluded from the mainstream. Simon may not be happy with this state of affairs, but he seems to have made peace with it without undue discomfort.

Now many mainstream macroeconomists, myself included, can be pretty critical of the limitations that this programme can place on economic thinking, particularly if it is taken too literally by microfoundations purists. But like it or not, that is how most macro research is done nowadays in the mainstream, and I see no sign of this changing anytime soon. (Paul Krugman discusses some reasons why here.) My own view is that I would like to see more tolerance and a greater variety of modelling approaches, but a pragmatic microfoundations macro will and should remain the major academic research paradigm.

Thus, within the mainstream, there is no basic difference in how to create a macroeconomic model. The difference is just in how to tweak the model in order to derive the desired policy implication.

When it comes to macroeconomic policy, and keeping to the different language idea, the only significant division I see is between the mainstream macro practiced by most economists, including those in most central banks, and anti-Keynesians. By anti-Keynesian I mean those who deny the potential for aggregate demand to influence output and unemployment in the short term.

So, even though New Keynesians have learned how to speak the language of New Classicals, New Keynesians can console themselves in retaining the upper hand in policy discussions. Which is why in policy terms, Simon chooses a label that is at least suggestive of a certain Keynesian primacy, the other side being defined in terms of their opposition to Keynesian policy. Half apologetically, Simon then asks: “Why do I use the term anti-Keynesian rather than, say, New Classical?” After all, it’s the New Classical model that’s being tweaked. Simon responds:

Partly because New Keynesian economics essentially just augments New Classical macroeconomics with sticky prices. But also because as far as I can see what holds anti-Keynesians together isn’t some coherent and realistic view of the world, but instead a dislike of what taking aggregate demand seriously implies.

This explanation really annoyed Steve Williamson who commented on Simon’s blog as follows:

Part of what defines a Keynesian (new or old), is that a Keynesian thinks that his or her views are “mainstream,” and that the rest of macroeconomic thought is defined relative to what Keynesians think – Keynesians reside at the center of the universe, and everything else revolves around them.

Simon goes on to explain what he means by the incoherence of the anti-Keynesian view of the world, pointing out that the Pigou Effect, which supposedly invalidated Keynes’s argument that perfect wage and price flexibility would not eventually restore full employment to an economy operating at less than full employment, has itself been shown not to be valid. And then Simon invokes that old standby Say’s Law.

Second, the evidence that prices are not flexible is so overwhelming that you need something else to drive you to ignore this evidence. Or to put it another way, you need something pretty strong for politicians or economists to make the ‘schoolboy error’ that is Say’s Law, which is why I think the basis of the anti-Keynesian view is essentially ideological.

Here, I think, Simon is missing something important. It was a mistake on Keynes’s part to focus on Say’s Law as the epitome of everything wrong with “classical economics.” Actually Say’s Law is a description of what happens in an economy when trading takes place at disequilibrium prices. At disequilibrium prices, potential gains from trade are left on the table. Not only are they left on the table, but the effects can be cumulative, because the failure to supply implies a further failure to demand. The Keynesian spending multiplier is the other side of the coin of the supply-side contraction envisioned by Say. Even infinite wage and price flexibility may not help an economy in which a lot of trade is occurring at disequilibrium prices.

The microeconomic theory of price adjustment is a theory of price adjustment in a single market. It is a theory in which, implicitly, all prices and quantities, except for a single price-quantity pair, are in equilibrium. Equilibrium in that single market is rapidly restored by price and quantity adjustment in that single market. That is why I have said that microeconomics rests on a macroeconomic foundation, and that is why it is illusory to imagine that macroeconomics can be logically derived from microfoundations. Microfoundations, insofar as they explain how prices adjust, are themselves founded on the existence of a macroeconomic equilibrium. Founding macroeconomics on microfoundations is just a form of bootstrapping.

If there is widespread unemployment, it may indeed be that wages are too high, and that a reduction in wages would restore equilibrium. But there is no general presumption that unemployment will be cured by a reduction in wages. Unemployment may be the result of a more general dysfunction in which all prices are away from their equilibrium levels, in which case no adjustment of the wage would solve the problem, so that there is no presumption that the current wage exceeds the full-equilibrium wage. This, by the way, seems to me to be nothing more than a straightforward implication of the Lipsey-Lancaster theory of second best.

Paul Krugman and Roger Farmer on Sticky Wages

I was pleasantly surprised last Friday to see that Paul Krugman took favorable notice of my post about sticky wages, while also registering some disagreement.

[Glasner] is partially right in suggesting that there has been a bit of a role reversal regarding the role of sticky wages in recessions: Keynes asserted that wage flexibility would not help, but Keynes’s self-proclaimed heirs ended up putting downward nominal wage rigidity at the core of their analysis. By the way, this didn’t start with the New Keynesians; way back in the 1940s Franco Modigliani had already taught us to think that everything depended on M/w, the ratio of the money supply to the wage rate.

That said, wage stickiness plays a bigger role in The General Theory — and in modern discussions that are consistent with what Keynes said — than Glasner indicates.

To document his assertion about Keynes, Krugman quotes a passage from the General Theory in which Keynes seems to suggest that in the nineteenth century inflexible wages were partially compensated for by price level movements. One might quibble with Krugman’s interpretation, but the payoff doesn’t seem worth the effort.

But I will quibble with the next paragraph in Krugman’s post.

But there’s another point: even if you don’t think wage flexibility would help in our current situation (and like Keynes, I think it wouldn’t), Keynesians still need a sticky-wage story to make the facts consistent with involuntary unemployment. For if wages were flexible, an excess supply of labor should be reflected in ever-falling wages. If you want to say that we have lots of willing workers unable to find jobs — as opposed to moochers not really seeking work because they’re cradled in Paul Ryan’s hammock — you have to have a story about why wages aren’t falling.

Not that I really disagree with Krugman that the behavior of wages since the 2008 downturn is consistent with some stickiness in wages. Nevertheless, it is still not necessarily the case that, if wages were flexible, an excess supply of labor would lead to ever-falling wages. In a search model of unemployment, if workers are expecting wages to rise every year at a 3% rate, and instead wages rise at only a 1% rate, the model predicts that unemployment will rise, and will continue to rise (or at least not return to the natural rate) as long as observed wages fail to rise as fast as workers expect. Presumably, over time, wage expectations would adjust to the new, lower rate of increase, but there is no guarantee that the transition would be speedy.
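Here is a minimal sketch of that prediction. The adaptive-expectations rule, the mapping from the wage gap to unemployment, and all the numbers are illustrative assumptions of my own, not a calibrated search model:

```python
# Workers' reservation wages embody an expected 3% annual raise; offered
# wages actually grow 1%; unemployment rises while the gap keeps widening.

def simulate(years=10, expected_growth=0.03, actual_growth=0.01, adapt=0.3):
    offered, reserved = 100.0, 100.0
    expectation = expected_growth
    for year in range(1, years + 1):
        offered *= 1 + actual_growth        # wages firms actually offer
        reserved *= 1 + expectation         # wages workers hold out for
        gap = max(0.0, (reserved - offered) / offered)
        unemployment = 5.0 + 100 * gap      # natural rate of 5% plus a gap penalty
        expectation += adapt * (actual_growth - expectation)  # slow adaptation
        print(f"year {year}: offered = {offered:.1f}, "
              f"reserved = {reserved:.1f}, u = {unemployment:.1f}%")

simulate()
```

Unemployment climbs while expectations outrun experience; and because the sketch lets the accumulated level gap persist even after the growth rates converge, unemployment stays above the natural rate, which is the sense in which the transition need not be speedy.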

Krugman concludes:

So sticky wages are an important part of both old and new Keynesian analysis, not because wage cuts would help us, but simply to make sense of what we see.

My own view is actually a bit more guarded. I think that “sticky wages” is simply a name that we apply to a problematic phenomenon for which we still haven’t found a really satisfactory explanation. Search models, for all their theoretical elegance, simply can’t explain the observed process by which unemployment rises during recessions, i.e., by layoffs and a lack of job openings rather than by an increase in quits and refused offers, as search models imply. The suggestion in my earlier post was intended to offer a possible basis for understanding what the phrase “sticky wages” is actually describing.

Roger Farmer, a long-time and renowned UCLA economist, also commented on my post on his new blog. Welcome to the blogosphere, Roger.

Roger has a different take on the sticky-wage phenomenon. Roger argues, as did some of the commenters to my post, that wages are not sticky. To document this assertion, Roger presents a diagram showing that the decline of nominal wages closely tracked that of prices for the first six years of the Great Depression. From this evidence Roger concludes that nominal wage rigidity is not the cause of rising unemployment during the Great Depression, and presumably, not the cause of rising unemployment in the Little Depression.

[Figure: farmer_sticky_wages, nominal wages and prices during the Great Depression]

Instead, Roger argues, the rise in unemployment was caused by an outbreak of self-fulfilling pessimism. Roger believes that there are many alternative equilibria, and which equilibrium (actually, which equilibrium time path) we reach depends on what our expectations are. Roger also believes that our expectations are rational, so that we get what we expect, or, as he succinctly phrases it, “beliefs are fundamental.” I have a lot of sympathy for this way of looking at the economy. In fact, one of the early posts on this blog was entitled “Expectations are Fundamental.” But as I have explained in other posts, I am not so sure that expectations are rational in any useful sense, because I think that individual expectations diverge. I don’t think that there is a single way of looking at reality. If there are many potential equilibria, why should everyone expect the same equilibrium? I can be an optimist, and you can be a pessimist. If we agreed, we would be right, but if we disagree, we will both be wrong. What economic mechanism is there to reconcile our expectations? In a world in which expectations diverge — a world of temporary equilibrium — there can be cumulative output reductions that get propagated across the economy as each sector fails to produce its maximum potential output, thereby reducing the demand for the output of other sectors to which it is linked. That’s what happens when there is trading at prices that don’t correspond to the full optimum equilibrium solution.

So I agree with Roger in part, but I think that the coordination problem is (at least potentially) more serious than he imagines.

Now We Know: Ethanol Caused the 2008 Financial Crisis and the Little Depression

In the latest issue of the Journal of Economic Perspectives, now freely available here, Brian Wright, an economist at the University of California, Berkeley, has a great article summarizing his research (with various co-authors, including H. Bobenrieth, H. Bobenrieth, and R. A. Juan) into the behavior of commodity markets, especially those for wheat, rice, and corn. Seemingly anomalous price movements in those markets — especially the sharp increase in prices since 2004 — have defied explanation. But Wright et al. have now shown that the anomalies can be explained by taking into account both the role of grain storage and the substitutability between these staples as caloric sources. With their improved modeling techniques, Wright and his co-authors have shown that the seemingly unexplained and sustained increases in world grain prices after 2005 “are best explained by the new policies causing a sustained surge in demand for biofuels.” Here is the abstract of Wright’s article.

In the last half-decade, sharp jumps in the prices of wheat, rice, and corn, which furnish about two-thirds of the calorie requirements of mankind, have attracted worldwide attention. These price jumps in grains have also revealed the chaotic state of economic analysis of agricultural commodity markets. Economists and scientists have engaged in a blame game, apportioning percentages of responsibility for the price spikes to bewildering lists of factors, which include a surge in meat consumption, idiosyncratic regional droughts and fires, speculative bubbles, a new “financialization” of grain markets, the slowdown of global agricultural research spending, jumps in costs of energy, and more. Several observers have claimed to identify a “perfect storm” in the grain markets in 2007/2008, a confluence of some of the factors listed above. In fact, the price jumps since 2005 are best explained by the new policies causing a sustained surge in demand for biofuels. The rises in food prices since 2004 have generated huge wealth transfers to global landholders, agricultural input suppliers, and biofuels producers. The losers have been net consumers of food, including large numbers of the world’s poorest peoples. The cause of this large global redistribution was no perfect storm. Far from being a natural catastrophe, it was the result of new policies to allow and require increased use of grain and oilseed for production of biofuels. Leading this trend were the wealthy countries, initially misinformed about the true global environmental and distributional implications.

This conclusion, standing alone, is a devastating indictment of the biofuels policies of the last decade that have immiserated much of the developing world and many of the poorest in the developed world for the benefit of a small group of wealthy landowners and biofuels rent seekers. But the research of Wright et al. shows definitively that the runup in commodities prices after 2005 was driven by a concerted policy of intervention in commodities markets aimed at substituting biofuels for fossil fuels by mandating the use of biofuels like ethanol, a policy that enjoyed the fervent support of many faux free-market conservatives serving the interests of big donors.

What does this have to do with the financial crisis of 2008? Simple. As Scott Sumner, Robert Hetzel, and a number of others (see, e.g., here) have documented, the Federal Open Market Committee, after reducing its Fed Funds target rate to 2% in March 2008 in the early stages of the downturn that started in December 2007, refused for seven months to reduce the target further, disregarding or unaware of a rapidly worsening contraction in output and employment in the third quarter of 2008. Why did the Fed ignore or overlook a rapidly worsening economy for most of 2008 — even for three full weeks after the Lehman debacle? Because the Fed was focused like a laser on rapidly rising commodities prices, fearing that inflation expectations were about to become unanchored — even as inflation expectations were collapsing in the summer of 2008. But now, thanks to Wright et al., we know that rising commodities prices had nothing to do with monetary policy; they were caused by an ethanol mandate that enjoyed the bipartisan support of the Bush administration, Congressional Democrats, and Congressional Republicans. Ah, the joy of bipartisanship.

Why Are Wages Sticky?

The stickiness of wages seems to be one of the key stylized facts of economics. For some reason, the idea that sticky wages may be the key to explaining business-cycle downturns in which output and employment — not just prices and nominal incomes — fall is now widely supposed to have been a, if not the, major theoretical contribution of Keynes in the General Theory. The association between sticky wages and Keynes is a rather startling, and altogether unfounded, inversion of what Keynes actually wrote in the General Theory, where he heaped scorn on what he called the “classical” doctrine that cyclical (or, in Keynesian terminology, “involuntary”) unemployment could be attributed to the failure of nominal wages to fall in response to a reduction in aggregate demand. Keynes never stopped insisting that the key defining characteristic of “involuntary” unemployment is that a nominal-wage reduction would not reduce it. The very definition of involuntary unemployment is that it can be eliminated only by an increase in the price level, not by a reduction in nominal wages.

Keynes devoted three entire chapters (19-21) in the General Theory to making, and mathematically proving, that argument. Insofar as I understand it, his argument doesn’t seem to me entirely convincing, because, among other reasons, his reasoning seems to involve implicit comparative-statics exercises that start from a disequilibrium situation; but that is definitely a topic for another post. My point is simply that the sticky-wages explanation for unemployment was exactly the “classical” explanation that Keynes was railing against in the General Theory.

So it’s really quite astonishing — and amusing — to observe that, in the current upside-down world of modern macroeconomics, what differentiates New Classical from New Keynesian macroeconomists is this: macroeconomists of the New Classical variety, dismissing wage stickiness as non-existent or empirically unimportant, assume that cyclical fluctuations in employment result from high rates of intertemporal substitution by labor in response to fluctuations in labor productivity, while macroeconomists of the New Keynesian variety argue that it is nominal-wage stickiness that prevents the steep cuts in nominal wages required to maintain employment in the face of exogenous shocks to aggregate demand or supply. New Classical and New Keynesian indeed! David Laidler and Axel Leijonhufvud have both remarked on this role reversal.

Many possible causes of nominal-wage stickiness (especially in the downward direction) have been advanced. For most of the twentieth century, wage stickiness was blamed on various forms of government intervention, e.g., pro-union legislation conferring monopoly privileges on unions, as well as other forms of wage-fixing like minimum-wage laws and even unemployment insurance. Whatever the merits of these criticisms, it is hard to credit claims that wage stickiness is mainly attributable to labor-market intervention on the side of labor unions. First, the phenomenon of wage stickiness was noted and remarked upon by economists as long ago as the early nineteenth century (e.g., by Henry Thornton in his classic The Nature and Effects of the Paper Credit of Great Britain), long before the enactment of pro-union legislation. Second, the repeal or weakening of pro-union legislation since the 1980s does not seem to have been associated with any significant reduction in nominal-wage stickiness.

Since the 1970s, a number of more sophisticated explanations of wage stickiness have been advanced, for example, search theories coupled with incorrect price-level expectations, long-term labor contracts, implicit contracts, and efficiency wages. Search theories locate the cause of nominal-wage stickiness in workers’ decisions about what wage offers to accept. Thus, the apparent downward stickiness of wages in a recession seems to imply that workers are turning down offers of employment, or quitting their jobs, in the mistaken expectation that search will uncover better offers; but that doesn’t seem to be what happens in recessions, when quits decline and layoffs increase. Long-term contracts can be, and frequently are, renegotiated when conditions change. Implicit contracts also can be adjusted when conditions change. So insofar as these theories posit that workers are somehow making decisions that lead to their own unemployment, the story seems to be incomplete. If workers could be made better off by accepting reduced wages instead of being unemployed, why isn’t that happening?

Efficiency wages posit a different cause for wage stickiness: that employers have cleverly discovered that by overpaying workers, workers will work their backsides off to continue to be considered worthy of receiving the rents that their employers are conferring upon them. Thus, when a recession hits, employers use the opportunity to weed out their least deserving employees. This theory at least has the virtue of not assigning responsibility for sub-optimal decisions to the workers.

All of these theories were powerfully challenged about eleven or twelve years ago by Truman Bewley in his book Why Wages Don’t Fall During a Recession. (See also Peter Howitt’s excellent review of Bewley’s book in the Journal of Economic Literature.) Bewley, though an accomplished theorist, simply went out and interviewed lots of business people, asking them to explain why, in recessions, they laid off workers rather than cut wages. Overwhelmingly, the responses Bewley received did not correspond to any of the standard theories of wage stickiness. Instead, business people explained wage stickiness as necessary to avoid a collapse of morale among their employees. Layoffs also hurt morale, but the workers that are retained get over it, and those let go are no longer around to hurt the morale of those that stay.

While I have always preferred the search explanation for apparent wage stickiness, which was largely developed at UCLA in the 1960s (see Armen Alchian’s classic “Information Costs, Pricing, and Resource Unemployment”), I recognize that it doesn’t seem to account for the basic facts of the cyclical pattern of layoffs and quits. So it seems clear that wage stickiness remains a problematic phenomenon. I don’t claim to have a good explanation to offer, but it does seem to me that an important element of an explanation may have been left out so far — at least I can’t recall having seen it mentioned.

Let’s think about it in the following way. Consider the incentive of a firm to cut its price when it can’t sell as much as it wants at the current price. The firm is off its supply curve. The firm is a price taker in the sense that, if it charges a higher price than its competitors, it won’t sell anything, losing all its sales to competitors. Would the firm have any incentive to cut its price? Presumably, yes. But let’s think about that incentive. Suppose the firm has a maximum output capacity of one unit, so that it can produce either zero or one unit in any time period. Suppose also that demand has gone down, so that the firm is not sure it will be able to sell the one unit of output that it can produce (assume further that the firm produces only if it has an order in hand). Would such a firm have an incentive to cut price? Only if it felt that, by doing so, it would increase the probability of getting an order sufficiently to compensate for the reduced profit margin at the lower price. Of course, the firm does not want to set a price higher than its competitors, so it will set a price no higher than the price that it expects its competitors to set.

Now consider a different sort of firm, a firm that can easily expand its output. Faced with the prospect of losing its current sales, this type of firm, unlike the first type, could offer to sell an increased amount at a reduced price. How could it sell an increased amount when demand is falling? By undercutting its competitors. A firm willing to cut its price could, by taking share away from its competitors, actually expand its output despite overall falling demand. That is the essence of competitive rivalry. Obviously, not every firm could succeed in such a strategy, but some firms, presumably those with a cost advantage, or a willingness to accept a reduced profit margin, could expand, thereby forcing marginal firms out of the market.
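A crude numerical sketch may make the asymmetry plainer. Nothing below comes from any actual market; the costs, rival price, and demand responses are invented purely to illustrate the contrast between the two types of firm:

```python
# An invented numerical contrast between the two firm types in the text.
# COST, RIVAL_PRICE, and the demand responses are assumptions made up for
# illustration, not estimates of anything.

COST = 5.0          # unit cost of production
RIVAL_PRICE = 10.0  # the price the firm expects its competitors to set

def win_probability(price):
    """Chance the one-unit firm lands the single order it can fill."""
    if price > RIVAL_PRICE:
        return 0.0  # priced above rivals, it sells nothing
    # matching rivals gives even odds; undercutting helps only modestly
    return min(1.0, 0.5 + 0.1 * (RIVAL_PRICE - price))

def type_one_profit(price):
    """Capacity fixed at one unit: a price cut buys probability, not volume."""
    return win_probability(price) * (price - COST)

def type_two_profit(price):
    """Expandable output: undercutting rivals wins their customers too."""
    captured = 3.0 * max(0.0, RIVAL_PRICE - price)  # volume taken from rivals
    return (1.0 + captured) * (price - COST)

for p in [10.0, 9.0, 8.0, 7.0]:
    print(f"price {p:4.1f}: type-one expects {type_one_profit(p):5.2f}, "
          f"type-two expects {type_two_profit(p):5.2f}")
```

With these made-up numbers, the capacity-constrained firm gains almost nothing (and soon loses) from cutting price, while the expandable firm’s profit rises steeply as it undercuts its rivals; that difference in incentives is all the sketch is meant to convey.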

Workers seem to me to have the characteristics of type-one firms, while most actual businesses seem to resemble type-two firms. So what I am suggesting is that the inability of workers to take over the jobs of co-workers (the analog of output expansion by a firm) when faced with the prospect of a layoff means that a powerful incentive for price cutting in response to reduced demand, which operates in non-labor markets, is not present in labor markets. A firm faced with the prospect of being terminated by a customer whose demand for the firm’s product has fallen may offer significant concessions to retain the customer’s business, especially if it can, in the process, gain an increased share of the customer’s business. A worker facing the prospect of a layoff cannot offer his employer a similar deal. And because the employer requires a workforce of many workers, he cannot generally avoid the morale-damaging effects of a wage cut by replacing current workers with another set of workers at a lower wage than the old workers were getting. So the point that I am suggesting seems to dovetail with the morale-preserving explanation for wage stickiness offered by Bewley.

If I am correct, then the incentive for price cutting is greater in markets for most goods and services than in markets for labor employment. This was Henry Thornton’s observation over two centuries ago when he wrote that it was a well-known fact that wages are more resistant than other prices to downward pressure in periods of weak demand. And if that is true, then it suggests that real wages tend to fluctuate countercyclically, which seems to be a stylized fact of business cycles, though whether that is indeed a fact remains controversial.

Big Ideas in Macroeconomics: A Review

Steve Williamson recently plugged a new book by Kartik Athreya (Big Ideas in Macroeconomics), an economist at the Federal Reserve Bank of Richmond, which tries to explain in relatively non-technical terms what modern macroeconomics is all about. I will acknowledge that my graduate training in macroeconomics predated the rise of modern macro, and I am not fluent in the language of modern macro, though I am trying to fill in the gaps. And this book is a good place to start. I found Athreya’s book a good overview of the field, explaining the fundamental ideas and how they fit together.

Big Ideas in Macroeconomics is a moderately big book, 415 pages, covering a very wide range of topics. It is noteworthy, I think, that despite its size, there is so little overlap between the topics covered in this book and those covered in more traditional, perhaps old-fashioned, books on macroeconomics. The index contains not a single entry on the price level, inflation, deflation, money, interest, total output, employment, or unemployment. Which is not to say that none of those concepts is ever mentioned or discussed, just that they are not treated, as they are in traditional macroeconomics books, as the principal objects of macroeconomic inquiry. The conduct of monetary or fiscal policy to achieve some explicit macroeconomic objective is never discussed. In contrast, there are repeated references to Walrasian equilibrium, the Arrow-Debreu-McKenzie model, the Radner model, Nash equilibria, Pareto optimality, and the first and second welfare theorems. It’s a new world.

The first two chapters present a fairly detailed description of the idea of Walrasian general equilibrium and its modern incarnation in the canonical Arrow-Debreu-McKenzie (ADM) model. The ADM model describes an economy of utility-maximizing households and profit-maximizing firms engaged in the production and consumption of commodities through time and space. There are markets for commodities dated by time period, specified by location, and classified by foreseeable contingent states of the world, so that the same physical commodity corresponds to many separate commodities, each associated with different time periods and locations and with different contingent states of the world. Prices for such physically identical commodities are not necessarily uniform across times, locations, or contingent states. The demand for road salt to de-ice roads, for example, depends on weather conditions, which depend on time and location and on states of the world. For each different possible weather contingency, there would be a distinct market for road salt for each location and time period.

The ADM model is solved once, for all time periods and all states of the world. Under appropriate conditions, there is at least one (and possibly more than one) intertemporal equilibrium, all trades being executed in advance, with all deliveries subsequently being carried out, as time and contingencies unfold, in accordance with the terms of the original contracts.

Given the existence of an equilibrium, i.e., a set of prices subject to which all agents are individually optimizing and all markets are clearing, there are two classical welfare theorems stating that any such equilibrium involves a Pareto-optimal allocation and that any Pareto-optimal allocation could be supported by an equilibrium set of prices corresponding to a suitably chosen set of initial endowments. For these optimality results to obtain, it is necessary that markets be complete in the sense that there is a market for each commodity in each time period and contingent state of the world. Without a complete set of markets in this sense, the Pareto-optimality of the Walrasian equilibrium cannot be proved.

Readers may wonder about the process by which an equilibrium price vector would actually be found through some trading process. Athreya invokes the fiction of a Walrasian clearinghouse in which all agents (truthfully) register their notional demands and supplies at alternative price vectors. Based on these responses the clearinghouse is able to determine, by a process of trial and error, the equilibrium price vector. Since the Walrasian clearinghouse presumes that no trading occurs except at an equilibrium price vector, there can be no assurance that an equilibrium price vector would ever be arrived at under an actual trading process in which trading occurs at disequilibrium prices. Moreover, as Clower and Leijonhufvud showed over 40 years ago (“Say’s Principle: What it Means and What it Doesn’t Mean”), trading at disequilibrium prices may cause cumulative contractions of aggregate demand because the total volume of trade at a disequilibrium price will always be less than the volume of trade at an equilibrium price, the volume of trade being constrained by the lesser of quantity supplied and quantity demanded.
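Both points lend themselves to a toy illustration. The linear demand and supply schedules below are arbitrary assumptions, chosen only to show (a) the clearinghouse’s trial-and-error price adjustment and (b) the Clower-Leijonhufvud short-side rule, under which trade at any disequilibrium price falls below the equilibrium volume:

```python
# A toy illustration, with arbitrary linear schedules, of (a) the Walrasian
# clearinghouse's trial-and-error search for the equilibrium price and
# (b) the short-side rule: at a disequilibrium price, trade volume is the
# lesser of quantity demanded and quantity supplied.

def demand(p):
    return max(0.0, 100.0 - 2.0 * p)

def supply(p):
    return max(0.0, 3.0 * p)

# (a) tatonnement: move the announced price in the direction of excess
# demand; crucially, no trade occurs until the process settles
price = 5.0
for _ in range(200):
    price += 0.05 * (demand(price) - supply(price))
print(f"equilibrium price ~ {price:.2f}, "
      f"equilibrium trade volume {min(demand(price), supply(price)):.1f}")

# (b) suppose trade actually happens at non-equilibrium prices instead
for p in [15.0, 20.0, 25.0]:
    print(f"at price {p}: trade volume {min(demand(p), supply(p)):.1f}")
```

Whether the announced price is too low (45 units trade) or too high (50 units), the realized volume falls short of the 60 units traded at the equilibrium price of 20, which is the seed of the cumulative contractions of aggregate demand just described.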

In the view of modern macroeconomics, then, Walrasian general equilibrium, as characterized by the ADM model, is the basic and overarching paradigm of macroeconomic analysis. To be sure, modern macroeconomics tries to go beyond the highly restrictive assumptions of the ADM model, but it is not clear whether the concessions made by modern macroeconomics to the real world go very far in enhancing the realism of the basic model.

Chapter 3 contains some interesting reflections on the importance of efficiency (Pareto-optimality) as a policy objective and on the trade-offs between efficiency and equity and between ex-ante and ex-post efficiency. But these topics are on the periphery of macroeconomics, so I will offer no comment here.

In chapter 4, Athreya turns to some common criticisms of modern macroeconomics: that it is too highly aggregated, too wedded to the rationality assumption, too focused on equilibrium steady states, and too highly mathematical. Athreya correctly points out that older macroeconomic models were also highly aggregated, so that if aggregation is a problem, it is not unique to modern macroeconomics. That’s a fair point, but it skirts some thorny issues. As Athreya acknowledges in chapter 5, an important issue separating modern macroeconomics from certain older macroeconomic traditions (both Keynesian and Austrian, among others) is the idea that macroeconomic dysfunction is a manifestation of coordination failure. It is a property — a remarkable property — of Walrasian general equilibrium that it achieves perfect (i.e., Pareto-optimal) coordination of disparate, self-interested, competitive individual agents, fully reconciling their plans in a way that might have been achieved by an omniscient and benevolent central planner. Walrasian general equilibrium fully solves the coordination problem. Insofar as important results of modern macroeconomics depend on the assumption that a real-life economy can be realistically characterized as a Walrasian equilibrium, modern macroeconomics is assuming that coordination failures are irrelevant to macroeconomics. It is only after coordination failures have been excluded from the purview of macroeconomics that it becomes legitimate (for the sake of mathematical tractability) to deploy representative-agent models, a coordination failure being tantamount, in the context of a representative-agent model, to a form of irrationality on the part of the representative agent. Athreya characterizes choices about the level of aggregation as a trade-off between realism and tractability, but it seems to me that, rather than making a trade-off between realism and tractability, modern macroeconomics has simply made an a priori decision that coordination problems are not a relevant macroeconomic concern.

A similar argument applies to Athreya’s defense of rational expectations and the use of equilibrium in modern macroeconomic models. I would not deny that there are good reasons to adopt rational expectations and full equilibrium in some modeling situations, depending on the problem that the theorist is trying to address. The question is whether it can be appropriate to deviate from the assumption of a full rational-expectations equilibrium for the purpose of modeling fluctuations over the course of a business cycle, especially a deep cyclical downturn. In particular, the idea of a Hicksian temporary equilibrium, in which agents hold divergent expectations about future prices but markets clear period by period given those divergent expectations, seems to offer (as in, e.g., Thompson’s “Reformulation of Macroeconomic Theory“) more realism and richer empirical content than modern macromodels of rational expectations.

Athreya offers the following explanation and defense of rational expectations:

[Rational expectations] purports to explain the expectations people actually have about the relevant items in their own futures. It does so by asking that their expectations lead to economy-wide outcomes that do not contradict their views. By imposing the requirement that expectations not be systematically contradicted by outcomes, economists keep an unobservable object from becoming a source of “free parameters” through which we can cheaply claim to have “explained” some phenomenon. In other words, in rational-expectations models, expectations are part of what is solved for, and so they are not left to the discretion of the modeler to impose willy-nilly. In so doing, the assumption of rational expectations protects the public from economists.

This defense of rational expectations plainly betrays the methodological arrogance of modern macroeconomics. I am all in favor of solving a model for equilibrium expectations, but solving for equilibrium expectations is certainly not the same as insisting that the only interesting or relevant result of a model is the one generated by the assumption of full equilibrium under rational expectations. (Again, see Thompson’s “Reformulation of Macroeconomic Theory,” as well as the classic paper by Foley and Sidrauski, and this post by Rajiv Sethi on his blog.) It may be relevant and useful to look at a model and examine its properties in a state in which agents hold inconsistent expectations about future prices, so that the temporary equilibrium existing at a point in time does not correspond to a steady state. Why is such an equilibrium uninteresting and uninformative about what happens in a business cycle? But evidently modern macroeconomists such as Athreya consider it their duty to ban such models from polite discourse — certainly from the leading economics journals — lest the public be tainted by economists who might otherwise dare to abuse their models by making illicit assumptions about expectations formation and equilibrium concepts.
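To make concrete what “expectations are part of what is solved for” means, consider the simplest textbook-style example (my illustration, not Athreya’s). Suppose the price level depends partly on the price level that people expect:

p = a + bE[p] + ε, with 0 < b < 1 and E[ε] = 0.

Requiring that expectations not be systematically contradicted by outcomes amounts to taking expectations through the equation and imposing consistency: E[p] = a + bE[p], so that E[p] = a/(1 − b). The expectation is an output of the model rather than an input chosen by the modeler. A temporary-equilibrium treatment would instead allow different agents to hold different subjective forecasts of p, with markets clearing period by period given those divergent forecasts; that is precisely the kind of exercise the rational-expectations strictures rule out.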

Chapter 5 is the most important chapter of the book. It is in this chapter that Athreya examines in more detail the kinds of adjustments that modern macroeconomists make to the Walrasian/ADM paradigm to accommodate the incompleteness of markets and the imperfections of expectation formation that limit the empirical relevance of the full ADM model as a macroeconomic paradigm. To do so, Athreya starts by explaining the Radner model, in which less than the full complement of Arrow-Debreu contingent-claims markets is available. In the Radner model, unlike the ADM model, trading takes place through time in those markets that actually exist, so that the full Walrasian equilibrium exists only if agents are able to form correct expectations about future prices. And even if the full Walrasian equilibrium exists, in the absence of a complete set of Arrow-Debreu markets, the classical welfare theorems may not obtain.

To Athreya, these limitations on the Radner version of the Walrasian model seem manageable. After all, if no one really knows how to improve on the equilibrium of the Radner model, the potential existence of Pareto improvements to the Radner equilibrium is not necessarily that big a deal. Athreya expands on the discussion of the Radner model by introducing the neoclassical growth model in both its deterministic and stochastic versions, all the elements of the dynamic stochastic general equilibrium (DSGE) model that characterizes modern macroeconomics now being in place. Athreya closes out the chapter with additional discussions of the role of further modifications to the basic Walrasian paradigm, particularly search models and overlapping-generations models.

I found the discussion in chapter 5 highly informative and useful, but it doesn’t seem to me that Athreya faces up to the limitations of the Radner model or to the implied disconnect between the Walrasian paradigm and macroeconomic analysis. A full Walrasian equilibrium exists in the Radner model only if all agents correctly anticipate future prices. If they don’t correctly anticipate future prices, then we are in the world of Hicksian temporary equilibrium. But in that world, the kind of coordination failures that Athreya so casually dismisses seem all too likely to occur. In a world of temporary equilibrium, there is no guarantee that intertemporal budget constraints will be effective, because those budget constraints reflect expected, not actual, future prices, and, in temporary equilibrium, expected prices are not the same for all transactors. Budget constraints are not binding in a world in which trading takes place through time based on possibly incorrect expectations of future prices. Not only does this mean that all the standard equilibrium and optimality conditions of Walrasian theory are violated; it also means that defaults on IOUs, and thus financial-market breakdowns, are entirely possible.
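A bare-bones numerical illustration (my own construction, with invented numbers) may help show how a budget constraint that looks satisfied at expected prices can fail at realized prices:

```python
# Invented numbers illustrating how an intertemporal budget constraint
# that holds at expected prices can fail at realized prices, so that an
# IOU issued in good faith ends in default.

debt_due = 100.0        # IOU issued today, repayable next period
expected_price = 10.0   # price at which the borrower expects to sell
planned_output = 11.0   # units the borrower plans to sell next period

expected_revenue = expected_price * planned_output  # 110: solvent ex ante
print(f"expected repayment capacity: {expected_revenue:.0f} "
      f"against debt of {debt_due:.0f}")

actual_price = 7.0      # the price that actually prevails
actual_revenue = actual_price * planned_output      # 77: insolvent ex post
print(f"realized repayment capacity: {actual_revenue:.0f}; "
      f"default shortfall: {max(0.0, debt_due - actual_revenue):.0f}")
```

Multiply that shortfall across many transactors holding mutually inconsistent price forecasts, and chains of default, and hence financial-market breakdowns, follow directly.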

In a key passage in chapter 5, Athreya dismisses coordination-failure explanations, invidiously characterized as Keynesian, for inefficient declines in output and employment. While acknowledging that such fluctuations could, in theory, be caused by “self-fulfilling pessimism or fear,” Athreya invokes the benchmark Radner trading arrangement of the ADM model. “In the Radner economy,” Athreya writes, “households and firms have correct expectations for the spot market prices one period hence.” The justification for that expectational assumption, which seems indistinguishable from the assumption of a full rational-expectations equilibrium, is left unstated. Athreya continues:

Granting that they indeed have such expectations, we can now ask about the extent to which, in a modern economy, we can have outcomes that are extremely sensitive to them. In particular, is it the case that under fairly plausible conditions, “optimism” and “pessimism” can be self-fulfilling in ways that make everyone (or nearly everyone) better off in the former than the latter?

Athreya argues that this is possible only if the aggregate production function of the economy is characterized by increasing returns to scale, so that productivity increases as output rises.

[W]hat I have in mind is that the structure of the economy must be such that when, for example, all households suddenly defer consumption spending (and save instead), interest rates do not adjust rapidly to forestall such a fall in spending by encouraging firms to invest.

Notice that Athreya makes no distinction between a reduction in consumption in which people shift into long-term real or financial assets and one in which people shift into holding cash. The two cases are hardly identical, but Athreya has nothing to say about the demand for money and its role in macroeconomics.

If they did, under what I will later describe as a “standard” production side for the economy, wages would, barring any countervailing forces, promptly rise (as the capital stock rises and makes workers more productive). In turn, output would not fall in response to pessimism.

What Athreya is saying is that if we assume that there is a reduction in the time preference of households, causing them to defer present consumption in order to increase their future consumption, the shift in time preference should be reflected in a rise in asset prices, causing an increase in the production of durable assets, and leading to an increase in wages insofar as the increase in the stock of fixed capital implies an increase in the marginal product of labor. Thus, if all the consequences of increased thrift are foreseen at the moment that current demand for output falls, there would be a smooth transition from the previous steady state corresponding to a high rate of time preference to the new steady state corresponding to a low rate of time preference.

Fine. If you assume that the economy always remains in full equilibrium, even in the transition from one steady state to another, because everyone has rational expectations, you will avoid a lot of unpleasantness. But what if entrepreneurial expectations do not change instantaneously, and the reduction in current demand for output corresponding to reduced spending on consumption causes entrepreneurs to reduce, not increase, their demand for capital equipment? If, after the shift in time preference, total spending actually falls, there may be a chain of disappointments in expectations, and a series of defaults on IOUs, culminating in a financial crisis. Pessimism may indeed be self-fulfilling. But Athreya has a just-so story to tell, and he seems satisfied that there is no other story to be told. Others may not be so easily satisfied, especially when his just-so story depends on a) the rational expectations assumption that many smart people have a hard time accepting as even remotely plausible, and b) the assumption that no trading takes place at disequilibrium prices. Athreya continues:

Thus, at least within the context of models in which households and firms are not routinely incorrect about the future, multiple self-fulfilling outcomes require particular features of the production side of the economy to prevail.

Actually what Athreya should have said is: “within the context of models in which households and firms always predict future prices correctly.”
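For readers who want the increasing-returns mechanism spelled out, here is a stylized example of my own devising (not Athreya’s). Suppose each of a large number of identical workers chooses whether or not to work, and suppose an aggregate externality makes the real wage depend on aggregate employment N, normalized to lie between 0 and 1:

w(N) = A·N^γ, with γ > 0 and 0 < b ≤ A,

where b is the wage below which a worker prefers not to work. If everyone expects everyone else to work, then N = 1 and w = A ≥ b, so working is individually rational, and the optimistic forecast fulfills itself. If everyone expects no one to work, then N is close to zero and w falls below b, so staying home is individually rational, and the pessimistic forecast fulfills itself as well. If γ ≤ 0, the wage no longer rises with aggregate employment, and the multiplicity disappears. That is Athreya’s claim: under correct foresight, self-fulfilling optimism and pessimism require something like increasing returns. The point of this review, however, is that once foresight can be incorrect, coordination failures require no such production-side features.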

In chapter 6, Athreya discusses how modern macroeconomics can and has contributed to the understanding of the financial crisis of 2007-08 and the subsequent downturn and anemic recovery. There is a lot of very useful information and discussion of various issues, especially in connection with banking and financial markets. But further comment at this point would be largely repetitive.

Anyway, despite my obvious and strong disagreements with much of what I read, I learned a lot from Athreya’s well-written and stimulating book, and I actually enjoyed reading it.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

