Archive for the 'rational expectations' Category

Romer v. Lucas

A couple of months ago, Paul Romer created a stir by publishing a paper in the American Economic Review, “Mathiness in the Theory of Economic Growth,” attacking two papers on aspects of growth theory, one by McGrattan and Prescott and the other by Lucas and Moll. He accused the authors of those papers of using mathematical modeling as a cover behind which to hide assumptions guaranteeing the results by which the authors could promote their research agendas. In subsequent blog posts, Romer has sharpened his attack, focusing it more directly on Lucas, whom he accuses of a non-scientific attachment to ideological predispositions that have led him to violate what Romer calls Feynman integrity, a concept eloquently described by Feynman himself in a 1974 commencement address at Caltech.

It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

Romer contrasts this admirable statement of what scientific integrity means with another by George Stigler, seemingly justifying, or at least excusing, a kind of special pleading on behalf of one’s own theory. And the institutional and perhaps ideological association between Stigler and Lucas seems to suggest that Lucas is inclined to follow the permissive and flexible Stiglerian ethic rather than the rigorous Feynman standard of scientific integrity. Romer regards this as a breach of the scientific method and a step backward for economics as a science.

I am not going to comment on the specific infraction that Romer accuses Lucas of having committed; I am not familiar with the mathematical question in dispute. Certainly if Lucas was aware that his argument in the paper Romer criticizes depended on the particular mathematical assumption in question, Lucas should have acknowledged that to be the case. And even if, as Lucas asserted in responding to a direct question by Romer, he could have derived the result in a more roundabout way, then he should have pointed that out, too. However, I don’t regard the infraction alleged by Romer to be more than a misdemeanor, hardly a scandalous breach of the scientific method.

Why did Lucas, who as far as I can tell was originally guided by Feynman integrity, switch to the mode of Stigler conviction? Market clearing did not have to evolve from auxiliary hypothesis to dogma that could not be questioned.

My conjecture is that economists let small accidents of intellectual history matter too much. If we had behaved like scientists, things could have turned out very differently. It is worth paying attention to these accidents because doing so might let us take more control over the process of scientific inquiry that we are engaged in. At the very least, we should try to reduce the odds that personal frictions and simple misunderstandings could once again cause us to veer off on some damaging trajectory.

I suspect that it was personal friction and a misunderstanding that encouraged a turn toward isolation (or if you prefer, epistemic closure) by Lucas and colleagues. They circled the wagons because they thought that this was the only way to keep the rational expectations revolution alive. The misunderstanding is that Lucas and his colleagues interpreted the hostile reaction they received from such economists as Robert Solow to mean that they were facing implacable, unreasoning resistance from such departments as MIT. In fact, in a remarkably short period of time, rational expectations completely conquered the PhD program at MIT.

More recently Romer, having done graduate work both at MIT and Chicago in the late 1970s, has elaborated on the personal friction between Solow and Lucas and how that friction may have affected Lucas, causing him to disengage from the professional mainstream. Paul Krugman, who was at MIT when this nastiness was happening, is skeptical of Romer’s interpretation.

My own view is that being personally and emotionally attached to one’s own theories, whether for religious or ideological or other non-scientific reasons, is not necessarily a bad thing as long as there are social mechanisms allowing scientists with different scientific viewpoints an opportunity to make themselves heard. If there are such mechanisms, the need for Feynman integrity is minimized, because individual lapses of integrity will be exposed and remedied by criticism from other scientists; scientific progress is possible even if scientists don’t live up to the Feynman standards, and maintain their faith in their theories despite contradictory evidence. But, as I am going to suggest below, there are reasons to doubt that social mechanisms have been operating to discipline – not suppress, just discipline – dubious economic theorizing.

My favorite example of the importance of personal belief in, and commitment to the truth of, one’s own theories is Galileo. As discussed by T. S. Kuhn in The Structure of Scientific Revolutions, Galileo was arguing for a paradigm change in how to think about the universe, despite being confronted by empirical evidence that appeared to refute the Copernican worldview he believed in: the apparent observation that the sun revolves around the earth, and the fact that the earth, as we directly perceive it, is, apart from the occasional earthquake, totally stationary — good old terra firma. Despite that apparently contradictory evidence, Galileo had an alternative vision of the universe in which the obvious movement of the sun in the heavens was explained by the spinning of the earth on its axis, and the stationarity of the earth by the assumption that all our surroundings move along with the earth, rendering its motion imperceptible, our perception of motion being relative to a specific frame of reference.

At bottom, this was an almost metaphysical world view not directly refutable by any simple empirical test. But Galileo adopted this worldview or paradigm, because he deeply believed it to be true, and was therefore willing to defend it at great personal cost, refusing to recant his Copernican view when he could have easily appeased the Church by describing the Copernican theory as just a tool for predicting planetary motion rather than an actual representation of reality. Early empirical tests did not support heliocentrism over geocentrism, but Galileo had faith that theoretical advancements and improved measurements would eventually vindicate the Copernican theory. He was right of course, but strict empiricism would have led to a premature rejection of heliocentrism. Without a deep personal commitment to the Copernican worldview, Galileo might not have articulated the case for heliocentrism as persuasively as he did, and acceptance of heliocentrism might have been delayed for a long time.

Imre Lakatos called such deeply held views underlying a scientific theory the hard core of the theory (aka scientific research program), a set of beliefs that are maintained despite apparent empirical refutation. The response to any empirical refutation is not to abandon or change the hard core but to adjust what Lakatos called the protective belt of the theory. Eventually, as refutations or empirical anomalies accumulate, the research program may undergo a crisis, leading to its abandonment, or it may simply degenerate if it fails to solve new problems or discover any new empirical facts or regularities. So Romer’s criticism of Lucas’s dogmatic attachment to market clearing may be no more justified, from a history-of-science perspective, than criticism of Galileo’s dogmatic attachment to heliocentrism would have been.

So while I have many problems with Lucas, lack of Feynman integrity is not really one of them, certainly not in the top ten. What I find more disturbing is his narrow conception of what economics is. As he himself wrote in an autobiographical sketch for Lives of the Laureates, he was bewitched by the beauty and power of Samuelson’s Foundations of Economic Analysis when he read it the summer before starting his training as a graduate student at Chicago in 1960. Although it did not have the transformative effect on me that it had on Lucas, I greatly admire the Foundations, but regardless of whether Samuelson himself meant to suggest such an idea (which I doubt), it is absurd to draw this conclusion from it:

I loved the Foundations. Like so many others in my cohort, I internalized its view that if I couldn’t formulate a problem in economic theory mathematically, I didn’t know what I was doing. I came to the position that mathematical analysis is not one of many ways of doing economic theory: It is the only way. Economic theory is mathematical analysis. Everything else is just pictures and talk.

Oh, come on. Would anyone ever think that unless you can formulate the problem of whether the earth revolves around the sun or the sun around the earth mathematically, you don’t know what you are doing? And, yet, remarkably, on the page following that silly assertion, one finds a totally brilliant description of what it was like to take graduate price theory from Milton Friedman.

Friedman rarely lectured. His class discussions were often structured as debates, with student opinions or newspaper quotes serving to introduce a problem and some loosely stated opinions about it. Then Friedman would lead us into a clear statement of the problem, considering alternative formulations as thoroughly as anyone in the class wanted to. Once formulated, the problem was quickly analyzed—usually diagrammatically—on the board. So we learned how to formulate a model, to think about and decide which features of a problem we could safely abstract from and which we needed to put at the center of the analysis. Here “model” is my term: It was not a term that Friedman liked or used. I think that for him talking about modeling would have detracted from the substantive seriousness of the inquiry we were engaged in, would have diverted us away from the attempt to discover “what can be done” into a merely mathematical exercise. [my emphasis]

Despite his respect for Friedman, it’s clear that Lucas did not adopt and internalize Friedman’s approach to economic problem solving, but instead internalized the caricature he extracted from Samuelson’s Foundations: that mathematical analysis is the only legitimate way of doing economic theory, and that, in particular, the essence of macroeconomics consists in a combination of axiomatic formalism and philosophical reductionism (microfoundationalism). For Lucas, the only scientifically legitimate macroeconomic models are those that can be deduced from the axiomatized Arrow-Debreu-McKenzie general equilibrium model, with solutions that can be computed and simulated in such a way that the simulations can be matched up against the available macroeconomic time series on output, investment and consumption.

This was both bad methodology and bad science, restricting the formulation of economic problems to those for which mathematical techniques are available to be deployed in finding solutions. On the one hand, the rational-expectations assumption made finding solutions to certain intertemporal models tractable; on the other, the assumption was justified as being required by the rationality assumptions of neoclassical price theory.

In a recent review of Lucas’s Collected Papers on Monetary Theory, Thomas Sargent makes a fascinating reference to Kenneth Arrow’s 1967 review of the first two volumes of Paul Samuelson’s Collected Works in which Arrow referred to the problematic nature of the neoclassical synthesis of which Samuelson was a chief exponent.

Samuelson has not addressed himself to one of the major scandals of current price theory, the relation between microeconomics and macroeconomics. Neoclassical microeconomic equilibrium with fully flexible prices presents a beautiful picture of the mutual articulations of a complex structure, full employment being one of its major elements. What is the relation between this world and either the real world with its recurrent tendencies to unemployment of labor, and indeed of capital goods, or the Keynesian world of underemployment equilibrium? The most explicit statement of Samuelson’s position that I can find is the following: “Neoclassical analysis permits of fully stable underemployment equilibrium only on the assumption of either friction or a peculiar concatenation of wealth-liquidity-interest elasticities. . . . [The neoclassical analysis] goes far beyond the primitive notion that, by definition of a Walrasian system, equilibrium must be at full employment.” . . .

In view of the Phillips curve concept in which Samuelson has elsewhere shown such interest, I take the second sentence in the above quotation to mean that wages are stationary whenever unemployment is X percent, with X positive; thus stationary unemployment is possible. In general, one can have a neoclassical model modified by some elements of price rigidity which will yield Keynesian-type implications. But such a model has yet to be constructed in full detail, and the question of why certain prices remain rigid becomes of first importance. . . . Certainly, as Keynes emphasized, the rigidity of prices has something to do with the properties of money; and the integration of the demand and supply of money with general competitive equilibrium theory remains incomplete despite attempts beginning with Walras himself.

If the neoclassical model with full price flexibility were sufficiently unrealistic that stable unemployment equilibrium be possible, then in all likelihood the bulk of the theorems derived by Samuelson, myself, and everyone else from the neoclassical assumptions are also contrafactual. The problem is not resolved by what Samuelson has called “the neoclassical synthesis,” in which it is held that the achievement of full employment requires Keynesian intervention but that neoclassical theory is valid when full employment is reached. . . .

Obviously, I believe firmly that the mutual adjustment of prices and quantities represented by the neoclassical model is an important aspect of economic reality worthy of the serious analysis that has been bestowed on it; and certain dramatic historical episodes – most recently the reconversion of the United States from World War II and the postwar European recovery – suggest that an economic mechanism exists which is capable of adaptation to radical shifts in demand and supply conditions. On the other hand, the Great Depression and the problems of developing countries remind us dramatically that something beyond, but including, neoclassical theory is needed.

Perhaps in a future post, I may discuss this passage, including a few sentences that I have omitted here, in greater detail. For now I will just say that Arrow’s reference to a “neoclassical microeconomic equilibrium with fully flexible prices” seems very strange inasmuch as price flexibility has absolutely no role in the proofs of the existence of a competitive general equilibrium for which Arrow and Debreu and McKenzie are justly famous. All the theorems Arrow et al. proved about the neoclassical equilibrium were related to existence, uniqueness and optimality of an equilibrium supported by an equilibrium set of prices. Price flexibility was not involved in those theorems, because the theorems had nothing to do with how prices adjust in response to a disequilibrium situation. What makes this juxtaposition of neoclassical microeconomic equilibrium with fully flexible prices even more remarkable is that about eight years earlier Arrow wrote a paper (“Toward a Theory of Price Adjustment”) whose main concern was the lack of any theory of price adjustment in competitive equilibrium, about which I will have more to say below.

Sargent also quotes from two lectures in which Lucas referred to Don Patinkin’s treatise Money, Interest and Prices which provided perhaps the definitive statement of the neoclassical synthesis Samuelson espoused. In one lecture (“My Keynesian Education” presented to the History of Economics Society in 2003) Lucas explains why he thinks Patinkin’s book did not succeed in its goal of integrating value theory and monetary theory:

I think Patinkin was absolutely right to try and use general equilibrium theory to think about macroeconomic problems. Patinkin and I are both Walrasians, whatever that means. I don’t see how anybody can not be. It’s pure hindsight, but now I think that Patinkin’s problem was that he was a student of Lange’s, and Lange’s version of the Walrasian model was already archaic by the end of the 1950s. Arrow and Debreu and McKenzie had redone the whole theory in a clearer, more rigorous, and more flexible way. Patinkin’s book was a reworking of his Chicago thesis from the middle 1940s and had not benefited from this more recent work.

In the other lecture, his 2003 Presidential address to the American Economic Association, Lucas commented further on why Patinkin fell short in his quest to unify monetary and value theory:

When Don Patinkin gave his Money, Interest, and Prices the subtitle “An Integration of Monetary and Value Theory,” value theory meant, to him, a purely static theory of general equilibrium. Fluctuations in production and employment, due to monetary disturbances or to shocks of any other kind, were viewed as inducing disequilibrium adjustments, unrelated to anyone’s purposeful behavior, modeled with vast numbers of free parameters. For us, today, value theory refers to models of dynamic economies subject to unpredictable shocks, populated by agents who are good at processing information and making choices over time. The macroeconomic research I have discussed today makes essential use of value theory in this modern sense: formulating explicit models, computing solutions, comparing their behavior quantitatively to observed time series and other data sets. As a result, we are able to form a much sharper quantitative view of the potential of changes in policy to improve peoples’ lives than was possible a generation ago.

So, as Sargent observes, Lucas recreated an updated neoclassical synthesis of his own based on the intertemporal Arrow-Debreu-McKenzie version of the Walrasian model, augmented by a rationale for the holding of money and perhaps some form of monetary policy, via the assumption of credit-market frictions and sticky prices. Despite the repudiation of the updated neoclassical synthesis by his friend Edward Prescott, for whom monetary policy is irrelevant, Lucas clings to neoclassical synthesis 2.0. Sargent quotes this passage from Lucas’s 1994 retrospective review of A Monetary History of the US by Friedman and Schwartz to show how tightly Lucas clings to neoclassical synthesis 2.0:

In Kydland and Prescott’s original model, and in many (though not all) of its descendants, the equilibrium allocation coincides with the optimal allocation: Fluctuations generated by the model represent an efficient response to unavoidable shocks to productivity. One may thus think of the model not as a positive theory suited to all historical time periods but as a normative benchmark providing a good approximation to events when monetary policy is conducted well and a bad approximation when it is not. Viewed in this way, the theory’s relative success in accounting for postwar experience can be interpreted as evidence that postwar monetary policy has resulted in near-efficient behavior, not as evidence that money doesn’t matter.

Indeed, the discipline of real business cycle theory has made it more difficult to defend real alternatives to a monetary account of the 1930s than it was 30 years ago. It would be a term-paper-size exercise, for example, to work out the possible effects of the 1930 Smoot-Hawley Tariff in a suitably adapted real business cycle model. By now, we have accumulated enough quantitative experience with such models to be sure that the aggregate effects of such a policy (in an economy with a 5% foreign trade sector before the Act and perhaps a percentage point less after) would be trivial.

Nevertheless, in the absence of some catastrophic error in monetary policy, Lucas evidently believes that the key features of the Arrow-Debreu-McKenzie model are closely approximated in the real world. That may well be true. But if it is, Lucas has no real theory to explain why.

In his 1959 paper (“Toward a Theory of Price Adjustment”), which I just mentioned, Arrow noted that the theory of competitive equilibrium has no explanation of how equilibrium prices are actually set. Indeed, the idea of competitive price adjustment is beset by a paradox: all agents in a general equilibrium being assumed to be price takers, how is it that a new equilibrium price is ever arrived at following any disturbance to an initial equilibrium? Arrow had no answer to the question, but offered the suggestion that, out of equilibrium, agents are not price takers, but price searchers, possessing some measure of market power to set price in the transition between the old and new equilibrium. But the upshot of Arrow’s discussion was that the problem and the paradox awaited solution. Almost sixty years on, some of us are still waiting, but for Lucas and the Lucasians, there is neither problem nor paradox, because the actual price is the equilibrium price, and the equilibrium price is always the (rationally) expected price.

If the social functions of science were being efficiently discharged, this rather obvious replacement of problem solving by question begging would not have escaped effective challenge and opposition. But Lucas was able to provide cover for this substitution by persuading the profession to embrace his microfoundational methodology, while offering irresistible opportunities for professional advancement to younger economists who could master the new analytical techniques that Lucas and others were rapidly introducing, thereby neutralizing or coopting many of the natural opponents to what became modern macroeconomics. So while Romer considers the conquest of MIT by the rational-expectations revolution, despite the opposition of Robert Solow, to be evidence for the advance of economic science, I regard it as a sign of the social failure of science to discipline a regressive development driven by the elevation of technique over substance.

Roger and Me

Last week Roger Farmer wrote a post elaborating on a comment that he had left to my post on Price Stickiness and Macroeconomics. Roger’s comment is aimed at this passage from my post:

[A]lthough price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

Here’s Roger’s comment:

I have a somewhat different take. I like Lucas’ insistence on equilibrium at every point in time as long as we recognize two facts. 1. There is a continuum of equilibria, both dynamic and steady state and 2. Almost all of them are Pareto suboptimal.

I made the following reply to Roger’s comment:

Roger, I think equilibrium at every point in time is ok if we distinguish between temporary and full equilibrium, but I don’t see how there can be a continuum of full equilibria when agents are making all kinds of long-term commitments by investing in specific capital. Having said that, I certainly agree with you that expectational shifts are very important in determining which equilibrium the economy winds up at.

To which Roger responded:

I am comfortable with temporary equilibrium as the guiding principle, as long as the equilibrium in each period is well defined. By that, I mean that, taking expectations as given in each period, each market clears according to some well defined principle. In classical models, that principle is the equality of demand and supply in a Walrasian auction. I do not think that is the right equilibrium concept.

Roger didn’t explain here – though he probably has elsewhere – exactly why he doesn’t think that equality of demand and supply in a Walrasian auction is the right equilibrium concept, and I would be interested in hearing his reasons. Perhaps he will clarify his thinking for me.

Hicks wanted to separate ‘fix price markets’ from ‘flex price markets’. I don’t think that is the right equilibrium concept either. I prefer to use competitive search equilibrium for the labor market. Search equilibrium leads to indeterminacy because there are not enough prices for the inputs to the search process. Classical search theory closes that gap with an arbitrary Nash bargaining weight. I prefer to close it by making expectations fundamental [a proposition I have advanced on this blog].

I agree that the Hicksian distinction between fix-price markets and flex-price markets doesn’t cut it. Nevertheless, it’s not clear to me that a Thompsonian temporary-equilibrium model in which expectations determine the reservation wage at which workers will accept employment (i.e., the labor-supply curve conditional on the expected wage) doesn’t work as well as a competitive search equilibrium in this context.

Once one treats expectations as fundamental, there is no longer a multiplicity of equilibria. People act in a well defined way and prices clear markets. Of course ‘market clearing’ in a search market may involve unemployment that is considerably higher than the unemployment rate that would be chosen by a social planner. And when there is steady state indeterminacy, as there is in my work, shocks to beliefs may lead the economy to one of a continuum of steady state equilibria.

There is an equilibrium for each set of expectations (with the understanding, I presume, that expectations are always uniform across agents). The problem that I see with this is that there doesn’t seem to be any interaction between outcomes and expectations. Expectations are always self-fulfilling, and changes in expectations are purely exogenous. But in a classic downturn, the process seems to be cumulative, the contraction seemingly feeding on itself, causing a spiral of falling prices, declining output, rising unemployment, and increasing pessimism.

That brings me to the second part of an equilibrium concept. Are expectations rational in the sense that subjective probability measures over future outcomes coincide with realized probability measures? That is not a property of the real world. It is a consistency property for a model.

Yes; I agree totally. Rational expectations is best understood as a property of a model: if agents expect an equilibrium price vector, the solution of the model is that same equilibrium price vector. It is not a substantive theory of expectation formation; the model does not posit that agents actually foresee the equilibrium price vector, which would be an extreme and unrealistic assumption about how the world actually works. The distinction is crucial, but it seems to me that it is largely ignored in practice.
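The consistency property can be made concrete with a toy linear model (my own illustrative example, not anything drawn from Lucas or from Roger's papers): suppose the price the model generates depends on the price agents expect, p = a + b·p^e. Rational expectations is then just the fixed-point requirement that the expected price reproduce itself as the model's solution:

```python
# Illustrative sketch (a hypothetical toy model, not from the post):
# rational expectations as a fixed-point consistency property of a model.
# The model maps an expected price into a realized price: p = a + b * p_e.

def realized_price(p_expected, a=10.0, b=0.5):
    """Price the model generates, given what agents expect."""
    return a + b * p_expected

# A rational-expectations equilibrium requires p_expected == realized price.
# For this linear model the fixed point is p* = a / (1 - b) = 20.
p_star = 10.0 / (1 - 0.5)
assert abs(realized_price(p_star) - p_star) < 1e-12  # expecting p* reproduces p*

# Any other expectation is inconsistent with the model's own solution:
assert realized_price(15.0) != 15.0

# Iterating expectation revision happens to converge here because |b| < 1;
# nothing in the rational-expectations concept itself guarantees this.
p_e = 0.0
for _ in range(100):
    p_e = realized_price(p_e)
print(round(p_e, 6))  # 20.0
```

The point of the sketch is exactly the distinction drawn above: the fixed point is a consistency requirement imposed on the model, while the convergence loop is a separate, substantive story about expectation formation that the equilibrium concept does not supply.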

And yes: if we plop our agents down into a stationary environment, their beliefs should eventually coincide with reality.

This seems to me a plausible-sounding assumption for which there is no theoretical proof and which, in view of Roger’s recent discussion of unit roots, has dubious empirical support.

If the environment changes in an unpredictable way, it is the belief function, a primitive of the model, that guides the economy to a new steady state. And I can envision models where expectations on the transition path are systematically wrong.

I need to read Roger’s papers about this, but I am left wondering by what mechanism the belief function guides the economy to a new steady state. It seems to me that the result requires some pretty strong assumptions.

The recent ‘nonlinearity debate’ on the blogs confuses the existence of multiple steady states in a dynamic model with the existence of multiple rational expectations equilibria. Nonlinearity is neither necessary nor sufficient for the existence of multiplicity. A linear model can have a unique indeterminate steady state associated with an infinite dimensional continuum of locally stable rational expectations equilibria. A linear model can also have a continuum of attracting points, each of which is an equilibrium. These are not just curiosities. Both of these properties characterize modern dynamic equilibrium models of the real economy.

I’m afraid that I don’t quite get the distinction that is being made here. Does “multiple steady states in a dynamic model” mean multiple equilibria of the full Arrow-Debreu general equilibrium model? And does “multiple rational-expectations equilibria” mean multiple equilibria conditional on the expectations of the agents? And I also am not sure what the import of this distinction is supposed to be.

My further question is, how does all of this relate to Leijonhufvud’s idea of the corridor, which Roger has endorsed? My own understanding of what Axel means by the corridor is that the corridor has certain stability properties that keep the economy from careening out of control, i.e. becoming subject to a cumulative dynamic process that does not lead the economy back to the neighborhood of a stable equilibrium. But if there is a continuum of attracting points, each of which is an equilibrium, how could any of those points be understood to be outside the corridor?

Anyway, those are my questions. I am hoping that Roger can enlighten me.

Price Stickiness and Macroeconomics

Noah Smith has a classically snide rejoinder to Stephen Williamson’s outrage at Noah’s Bloomberg paean to price stickiness and to the classic Ball and Mankiw article on the subject, an article that provoked an embarrassingly outraged response from Robert Lucas when published over 20 years ago. I don’t know if Lucas ever got over it, but evidently Williamson hasn’t.

Now, to be fair, Lucas’s outrage, though misplaced, was understandable. Ball and Mankiw, writing in an ironic tone, cast themselves as defenders of traditional macroeconomics – both Keynesian and Monetarist – against the onslaught of “heretics” like Lucas, Sargent, Kydland and Prescott. Lucas was evidently so offended that he stopped reading after the first few pages and then, in a fit of righteous indignation, wrote a diatribe attacking Ball and Mankiw as religious fanatics trying to halt the progress of science, as if that were the real message of the paper – not, to say the least, a very sophisticated reading of what Ball and Mankiw wrote.

While I am not hostile to the idea of price stickiness — one of the most popular posts I have written being an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input, and most output, prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness and about many explanations, including those offered by Ball and Mankiw, for price stickiness. I think that the negative reactions that price stickiness elicits from a lot of economists — and not only from Lucas and Williamson — reflect a feeling that price stickiness is not well grounded in any economic theory.

Let me offer a slightly different criticism of price stickiness as a feature of macroeconomic models, which is simply that although price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. That is what students are taught in Econ 1. And it is an important lesson, but it is also a “partial” lesson. It is partial, because it applies to a single market that is out of equilibrium. The implicit assumption in that exercise is that nothing else is changing, which means that all other markets — well, not quite all other markets, but I will ignore that nuance – are in equilibrium. That’s what I mean when I say (as I have done before) that just as macroeconomics needs microfoundations, microeconomics needs macrofoundations.

Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, that a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. Even more disturbing, the standard price-adjustment rule may not, even under a tatonnement process in which no trading is allowed at disequilibrium prices, lead to the discovery of the equilibrium price vector. Of course, in the real world trading occurs routinely at disequilibrium prices, so that the “mechanical” forces tending an economy toward equilibrium are even weaker than the standard analysis of price-adjustment would suggest.

This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Franklin Fisher made this point 30 years ago in an important, but insufficiently appreciated, volume Disequilibrium Foundations of Equilibrium Economics. But the idea goes back even further: to Hayek’s important work on intertemporal equilibrium, especially his classic paper “Economics and Knowledge,” formalized by Hicks in the temporary-equilibrium model described in Value and Capital.

The key point made by Hayek in this context is that there can be an intertemporal equilibrium if and only if all agents formulate their individual plans on the basis of the same expectations of future prices. If their expectations for future prices are not the same, then any plans based on incorrect price expectations will have to be revised, or abandoned altogether, as price expectations are disappointed over time. For price adjustment to lead an economy back to equilibrium, the price adjustment must converge on an equilibrium price vector and on correct price expectations. But, as Hayek understood in 1937, and as Fisher explained in a dense treatise 30 years ago, we have no economic theory that explains how such a price vector, even if it exists, is arrived at, even under a tatonnement process, much less under decentralized price setting. Pinning the blame on this vague thing called price stickiness doesn’t address the deeper underlying theoretical issue.

Of course for Lucas et al. to scoff at price stickiness on these grounds is a bit rich, because Lucas and his followers seem entirely comfortable with assuming that the equilibrium price vector is rationally expected. Indeed, rational expectation of the equilibrium price vector is held up by Lucas as precisely the microfoundation that transformed the unruly field of macroeconomics into a real science.

Krugman on the Volcker Disinflation

Earlier in the week, Paul Krugman wrote about the Volcker disinflation of the 1980s. Krugman’s annoyance at Stephen Moore (whom Krugman flatters by calling him an economist) and John Cochrane (whom Krugman disflatters by comparing him to Stephen Moore) is understandable, but he has less excuse for letting himself get carried away in an outburst of Keynesian triumphalism.

Right-wing economists like Stephen Moore and John Cochrane — it’s becoming ever harder to tell the difference — have some curious beliefs about history. One of those beliefs is that the experience of disinflation in the 1980s was a huge shock to Keynesians, refuting everything they believed. What makes this belief curious is that it’s the exact opposite of the truth. Keynesians came into the Volcker disinflation — yes, it was mainly the Fed’s doing, not Reagan’s — with a standard, indeed textbook, model of what should happen. And events matched their expectations almost precisely.

I’ve been cleaning out my library, and just unearthed my copy of Dornbusch and Fischer’s Macroeconomics, first edition, copyright 1978. Quite a lot of that book was concerned with inflation and disinflation, using an adaptive-expectations Phillips curve — that is, an assumed relationship in which the current inflation rate depends on the unemployment rate and on lagged inflation. Using that approach, they laid out at some length various scenarios for a strategy of reducing the rate of money growth, and hence eventually reducing inflation. Here’s one of their charts, with the top half showing inflation and the bottom half showing unemployment:




Not the cleanest dynamics in the world, but the basic point should be clear: cutting inflation would require a temporary surge in unemployment. Eventually, however, unemployment could come back down to more or less its original level; this temporary surge in unemployment would deliver a permanent reduction in the inflation rate, because it would change expectations.

And here’s what the Volcker disinflation actually looked like:


A temporary but huge surge in unemployment, with inflation coming down to a sustained lower level.

So were Keynesian economists feeling amazed and dismayed by the events of the 1980s? On the contrary, they were feeling pretty smug: disinflation had played out exactly the way the models in their textbooks said it should.
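The adaptive-expectations mechanics Krugman is describing can be sketched in a few lines. In this minimal accelerationist Phillips curve, expected inflation equals last period’s inflation, and actual inflation falls below expected inflation whenever unemployment exceeds its natural rate; all the numbers are hypothetical, chosen only for illustration.

```python
# Minimal accelerationist (adaptive-expectations) Phillips curve of the
# kind in the Dornbusch-Fischer textbook.  Parameters are hypothetical.
NATURAL_RATE = 6.0   # natural rate of unemployment, percent
SLOPE = 0.5          # inflation response to excess unemployment

def simulate(inflation0, unemployment_path):
    """Return the inflation path implied by a given unemployment path."""
    inflation = inflation0
    path = []
    for u in unemployment_path:
        expected = inflation                          # adaptive expectations
        inflation = expected - SLOPE * (u - NATURAL_RATE)
        path.append(inflation)
    return path

# A temporary surge in unemployment (four years at 10%), followed by a
# return to the natural rate -- the textbook disinflation scenario.
u_path = [10.0] * 4 + [NATURAL_RATE] * 6
pi_path = simulate(10.0, u_path)
print(pi_path)
# Inflation falls by 2 points in each surge year, then stays at the new,
# permanently lower rate once unemployment returns to the natural rate.
```

The temporary unemployment surge buys a permanent reduction in inflation because it ratchets down expectations, which is precisely the pattern Krugman says the Keynesian textbooks predicted for the Volcker disinflation.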

Well, this is true, but only up to a point. What Krugman neglects to mention, which is why the Volcker disinflation is not widely viewed as having enhanced the Keynesian forecasting record, is that most Keynesians had opposed the Reagan tax cuts, and one of their main arguments was that the tax cuts would be inflationary. However, in the Reagan-Volcker combination of loose fiscal policy and tight money, it was tight money that dominated. Score one for the Monetarists. The rapid drop in inflation, though accompanied by high unemployment, was viewed as a vindication of the Monetarist view that inflation is always and everywhere a monetary phenomenon, a view which now seems pretty commonplace, but in the 1970s and 1980s was hotly contested, including by Keynesians.

However, the (Friedmanian) Monetarist view was only partially vindicated, because the Volcker disinflation was achieved by way of high interest rates, not by tightly controlling the money supply. As I have written before on this blog (here and here) and in chapter 10 of my book on free banking (especially, pp. 214-21), Volcker actually tried very hard to slow down the rate of growth in the money supply, but the attempt to implement a k-percent rule induced perverse dynamics: whenever monetary growth overshot the target range, the anticipation of an imminent tightening created a precautionary demand for money, causing people, fearful that cash would soon be unavailable, to hoard cash by liquidating assets. The scenario played itself out repeatedly in the 1981-82 period, when the most closely watched economic or financial statistic in the world was the Fed’s weekly report of growth in the money supply, with growth rates over the target range being associated with falling stock and commodities prices. Finally, in the summer of 1982, Volcker announced that the Fed would stop trying to achieve its money growth targets, and the great stock market rally of the 1980s took off, and economic recovery quickly followed.

So neither the old-line Keynesian dismissal of monetary policy as irrelevant to the control of inflation, nor the Monetarist obsession with controlling the monetary aggregates fared very well in the aftermath of the Volcker disinflation. The result was the New Keynesian focus on monetary policy as the key tool for macroeconomic stabilization, except that monetary policy no longer meant controlling a targeted monetary aggregate, but controlling a targeted interest rate (as in the Taylor rule).

But Krugman doesn’t mention any of this, focusing instead on the conflicts among non-Keynesians.

Indeed, it was the other side of the macro divide that was left scrambling for answers. The models Chicago was promoting in the 1970s, based on the work of Robert Lucas and company, said that unemployment should have come down quickly, as soon as people realized that the Fed really was bringing down inflation.

Lucas came to Chicago in 1975, and he was the wave of the future at Chicago, but it’s not as if Friedman disappeared; after all, he did win the Nobel Prize in 1976. And although Friedman did not explicitly attack Lucas, it’s clear that, to his credit, Friedman never bought into the rational-expectations revolution. So although Friedman may have been surprised at the depth of the 1981-82 recession – in part attributable to the perverse effects of the money-supply targeting he had convinced the Fed to adopt – the adaptive-expectations model in the Dornbusch-Fischer macro textbook is as much Friedmanian as Keynesian. And by the way, Dornbusch and Fischer were both at Chicago in the mid-1970s when the first edition of their macro text was written.

By a few years into the 80s it was obvious that those models were unsustainable in the face of the data. But rather than admit that their dismissal of Keynes was premature, most of those guys went into real business cycle theory — basically, denying that the Fed had anything to do with recessions. And from there they just kept digging ever deeper into the rabbit hole.

But anyway, what you need to know is that the 80s were actually a decade of Keynesian analysis triumphant.

I am just as appalled as Krugman by the real-business-cycle episode, but it was as much a rejection of Friedman, and of all other non-Keynesian monetary theory, as of Keynes. So the inspiring morality tale spun by Krugman in which the hardy band of true-blue Keynesians prevail against those nasty new classical barbarians is a bit overdone and vastly oversimplified.

Explaining the Hegemony of New Classical Economics

Simon Wren-Lewis, Robert Waldmann, and Paul Krugman have all recently devoted additional space to explaining – ruefully, for the most part – how it came about that New Classical Economics took over mainstream macroeconomics just about half a century after the Keynesian Revolution. And Mark Thoma got them all started by a complaint about the sorry state of modern macroeconomics and its failure to prevent or to cure the Little Depression.

Wren-Lewis believes that the main problem with modern macro is too much of a good thing, the good thing being microfoundations. Those microfoundations, in Wren-Lewis’s rendering, filled certain gaps in the ad hoc Keynesian expenditure functions. Although the gaps were not as serious as the New Classical School believed, adding an explicit model of intertemporal expenditure plans derived from optimization conditions and rational expectations, was, in Wren-Lewis’s estimation, an improvement on the old Keynesian theory. The improvements could have been easily assimilated into the old Keynesian theory, but weren’t because New Classicals wanted to junk, not improve, the received Keynesian theory.

Wren-Lewis believes that it is actually possible for the progeny of Keynes and the progeny of Fisher to coexist harmoniously, and despite his discomfort with the anti-Keynesian bias of modern macroeconomics, he views the current macroeconomic research program as progressive. By progressive, I interpret him to mean that macroeconomics is still generating new theoretical problems to investigate, and that attempts to solve those problems are producing a stream of interesting and useful publications – interesting and useful, that is, to other economists doing macroeconomic research. Whether the problems and their solutions are useful to anyone else is perhaps not quite so clear. But even if interest in modern macroeconomics is largely confined to practitioners of modern macroeconomics, that fact alone would not conclusively show that the research program in which they are engaged is not progressive, the progressiveness of the research program requiring no more than a sufficient number of self-selecting econ grad students, and a willingness of university departments and sources of research funding to cater to the idiosyncratic tastes of modern macroeconomists.

Robert Waldmann, unsurprisingly, takes a rather less charitable view of modern macroeconomics, focusing on its failure to discover any new, previously unknown, empirical facts about macroeconomics, or to better explain known facts than do alternative models, e.g., by more accurately predicting observed macro time-series data. By that admittedly demanding criterion, Waldmann finds nothing progressive in the modern macroeconomics research program.

Paul Krugman weighed in by emphasizing not only the ideological agenda behind the New Classical Revolution, but the self-interest of those involved:

Well, while the explicit message of such manifestos is intellectual – this is the only valid way to do macroeconomics – there’s also an implicit message: from now on, only my students and disciples will get jobs at good schools and publish in major journals. And that, to an important extent, is exactly what happened; Ken Rogoff wrote about the “scars of not being able to publish sticky-price papers during the years of new classical repression.” As time went on and members of the clique made up an ever-growing share of senior faculty and journal editors, the clique’s dominance became self-perpetuating – and impervious to intellectual failure.

I don’t disagree that there has been intellectual repression, and that this has made professional advancement difficult for those who don’t subscribe to the reigning macroeconomic orthodoxy, but I think that the story is more complicated than Krugman suggests. The reason I say that is because I cannot believe that the top-ranking economics departments at schools like MIT, Harvard, UC Berkeley, Princeton, and Penn, and other supposed bastions of saltwater thinking have bought into the underlying New Classical ideology. Nevertheless, microfounded DSGE models have become de rigueur for any serious academic macroeconomic theorizing, not only in the Journal of Political Economy (Chicago), but in the Quarterly Journal of Economics (Harvard), the Review of Economics and Statistics (MIT), and the American Economic Review. New Keynesians, like Simon Wren-Lewis, have made their peace with the new order, and old Keynesians have been relegated to the periphery, unable to publish in the journals that matter without observing the generally accepted (even by those who don’t subscribe to New Classical ideology) conventions of proper macroeconomic discourse.

So I don’t think that Krugman’s ideology plus self-interest story fully explains how the New Classical hegemony was achieved. What I think is missing from his story is the spurious methodological requirement of microfoundations foisted on macroeconomists in the course of the 1970s. I have discussed microfoundations in a number of earlier posts (here, here, here, here, and here) so I will try, possibly in vain, not to repeat myself too much.

The importance and desirability of microfoundations were never questioned. What, after all, was the neoclassical synthesis, if not an attempt, partly successful and partly unsuccessful, to integrate monetary theory with value theory, or macroeconomics with microeconomics? But in the early 1970s the focus of attempts, notably in the 1970 Phelps volume, to provide microfoundations changed from embedding the Keynesian system in a general-equilibrium framework, as Patinkin had done, to providing an explicit microeconomic rationale for the Keynesian idea that the labor market could not be cleared via wage adjustments.

In chapter 19 of the General Theory, Keynes struggled to come up with a convincing general explanation for the failure of nominal-wage reductions to clear the labor market. Instead, he offered an assortment of seemingly ad hoc arguments about why nominal-wage adjustments would not succeed in reducing unemployment, enabling all workers willing to work at the prevailing wage to find employment at that wage. This forced Keynesians into the awkward position of relying on an argument — wages tend to be sticky, especially in the downward direction — that was not really different from one used by the “Classical Economists” excoriated by Keynes to explain high unemployment: that rigidities in the price system – often politically imposed rigidities – prevented wage and price adjustments from equilibrating demand with supply in the textbook fashion.

These early attempts at providing microfoundations were largely exercises in applied price theory, explaining why self-interested behavior by rational workers and employers lacking perfect information about all potential jobs and all potential workers would not result in immediate price adjustments that would enable all workers to find employment at a uniform market-clearing wage. Although these largely search-theoretic models led to a more sophisticated and nuanced understanding of labor-market dynamics than economists had previously had, the models ultimately did not provide a fully satisfactory account of cyclical unemployment. But the goal of microfoundations was to explain a certain set of phenomena in the labor market that had not been seriously investigated, in the hope that price and wage stickiness could be analyzed as an economic phenomenon rather than being arbitrarily introduced into models as an ad hoc, albeit seemingly plausible, assumption.

But instead of pursuing microfoundations as an explanatory strategy, the New Classicals chose to impose it as a methodological prerequisite. A macroeconomic model was inadmissible unless it could be explicitly and formally derived from the optimizing choices of fully rational agents. Instead of trying to enrich and potentially transform the Keynesian model with a deeper analysis and understanding of the incentives and constraints under which workers and employers make decisions, the New Classicals used microfoundations as a methodological tool by which to delegitimize Keynesian models, those models being insufficiently or improperly microfounded. Instead of using microfoundations as a method by which to make macroeconomic models conform more closely to the imperfect and limited informational resources available to actual employers deciding to hire or fire employees, and actual workers deciding to accept or reject employment opportunities, the New Classicals chose to use microfoundations as a methodological justification for the extreme unrealism of the rational-expectations assumption, portraying it as nothing more than the consistent application of the rationality postulate underlying standard neoclassical price theory.

For the New Classicals, microfoundations became a reductionist crusade. There is only one kind of economics, and it is not macroeconomics. Even the idea that there could be a conceptual distinction between micro and macroeconomics was unacceptable to Robert Lucas, just as the idea that there is, or could be, a mind not reducible to the brain is unacceptable to some deranged neuroscientists. No science, not even chemistry, has been reduced to physics. Were it ever to be accomplished, the reduction of chemistry to physics would be a great scientific achievement. Some parts of chemistry have been reduced to physics, which is a good thing, especially when doing so actually enhances our understanding of the chemical process and results in an improved, or more exact, restatement of the relevant chemical laws. But it would be absurd and preposterous simply to reject, on supposed methodological principle, those parts of chemistry that have not been reduced to physics. And how much more absurd would it be to reject higher-level sciences, like biology and ecology, for no other reason than that they have not been reduced to physics.

But reductionism is what modern macroeconomics, under the New Classical hegemony, insists on. No exceptions allowed; don’t even ask. Meekly and unreflectively, modern macroeconomics has succumbed to the absurd and arrogant methodological authoritarianism of the New Classical Revolution. What an embarrassment.

UPDATE (11:43 AM EDST): I made some minor editorial revisions to eliminate some grammatical errors and misplaced or superfluous words.

Temporary Equilibrium One More Time

It’s always nice to be noticed, especially by Paul Krugman. So I am not upset, but in his response to my previous post, I don’t think that Krugman quite understood what I was trying to convey. I will try to be clearer this time. It will be easiest if I just quote from his post and insert my comments or explanations.

Glasner is right to say that the Hicksian IS-LM analysis comes most directly not out of Keynes but out of Hicks’s own Value and Capital, which introduced the concept of “temporary equilibrium”.

Actually, that’s not what I was trying to say. I wasn’t making any explicit connection between Hicks’s temporary-equilibrium concept from Value and Capital and the IS-LM model that he introduced two years earlier in his paper on Keynes and the Classics. Of course that doesn’t mean that the temporary equilibrium method isn’t connected to the IS-LM model; one would need to do a more in-depth study than I have done of Hicks’s intellectual development to determine how much IS-LM was influenced by Hicks’s interest in intertemporal equilibrium and in the method of temporary equilibrium as a way of analyzing intertemporal issues.

This involves using quasi-static methods to analyze a dynamic economy, not because you don’t realize that it’s dynamic, but simply as a tool. In particular, V&C discussed at some length a temporary equilibrium in a three-sector economy, with goods, bonds, and money; that’s essentially full-employment IS-LM, which becomes the 1937 version with some price stickiness. I wrote about that a long time ago.

Now I do think that it’s fair to say that the IS-LM model was very much in the spirit of Value and Capital, in which Hicks deployed an explicit general-equilibrium model to analyze an economy at a Keynesian level of aggregation: goods, bonds, and money. But the temporary-equilibrium aspect of Value and Capital went beyond the Keynesian analysis, because the temporary equilibrium analysis was explicitly intertemporal, all agents formulating plans based on explicit future price expectations, and the inconsistency between expected prices and actual prices was explicitly noted, while in the General Theory, and in IS-LM, price expectations were kept in the background, making an appearance only in the discussion of the marginal efficiency of capital.

So is IS-LM really Keynesian? I think yes — there is a lot of temporary equilibrium in The General Theory, even if there’s other stuff too. As I wrote in the last post, one key thing that distinguished TGT from earlier business cycle theorizing was precisely that it stopped trying to tell a dynamic story — no more periods, forced saving, boom and bust, instead a focus on how economies can stay depressed. Anyway, does it matter? The real question is whether the method of temporary equilibrium is useful.

That is precisely where I think Krugman’s grasp on the concept of temporary equilibrium is slipping. Temporary equilibrium is indeed about periods, and it is explicitly dynamic. In my previous post I referred to Hicks’s discussion in Capital and Growth, about 25 years after writing Value and Capital, in which he wrote

The Temporary Equilibrium model of Value and Capital, also, is “quasi-static” [like the Keynes theory] – in just the same sense. The reason why I was contented with such a model was because I had my eyes fixed on Keynes.

As I read this passage now — and it really bothered me when I read it as I was writing my previous post — I realize that what Hicks was saying was that his desire to conform to the Keynesian paradigm led him to compromise the integrity of the temporary equilibrium model, by forcing it to be “quasi-static” when it really was essentially dynamic. The challenge has been to convert a “quasi-static” IS-LM model into something closer to the temporary-equilibrium method that Hicks introduced, but did not fully execute in Value and Capital.

What are the alternatives? One — which took over much of macro — is to do intertemporal equilibrium all the way, with consumers making lifetime consumption plans, prices set with the future rationally expected, and so on. That’s DSGE — and I think Glasner and I agree that this hasn’t worked out too well. In fact, economists who never learned temporary-equilibrium-style modeling have had a strong tendency to reinvent pre-Keynesian fallacies (cough-Say’s Law-cough), because they don’t know how to think out of the forever-equilibrium straitjacket.

Yes, I agree! Rational expectations, full-equilibrium models have turned out to be a regression, not an advance. But the way I would make the point is that the temporary-equilibrium method provides a sort of middle way to do intertemporal dynamics without presuming that consumption plans and investment plans are always optimal.

What about disequilibrium dynamics all the way? Basically, I have never seen anyone pull this off. Like the forever-equilibrium types, constant-disequilibrium theorists have a remarkable tendency to make elementary conceptual mistakes.

Again, I agree. We can’t work without some sort of equilibrium conditions, but temporary equilibrium provides a way to keep the discipline of equilibrium without assuming (nearly) full optimality.

Still, Glasner says that temporary equilibrium must involve disappointed expectations, and fails to take account of the dynamics that must result as expectations are revised.

Perhaps I was unclear, but I thought I was saying just the opposite. It’s the “quasi-static” IS-LM model, not temporary equilibrium, that fails to take account of the dynamics produced by revised expectations.

I guess I’d say two things. First, I’m not sure that this is always true. Hicks did indeed assume static expectations — the future will be like the present; but in Keynes’s vision of an economy stuck in sustained depression, such static expectations will be more or less right.

Again, I agree. There may be self-fulfilling expectations of a low-income, low-employment equilibrium. But I don’t think that that is the only explanation for such a situation, and certainly not for the downturn that can lead to such an equilibrium.

Second, those of us who use temporary equilibrium often do think in terms of dynamics as expectations adjust. In fact, you could say that the textbook story of how the short-run aggregate supply curve adjusts over time, eventually restoring full employment, is just that kind of thing. It’s not a great story, but it is the kind of dynamics Glasner wants — and it’s Econ 101 stuff.

Again, I agree. It’s not a great story, but, like it or not, the story is not a Keynesian story.

So where does this leave us? I’m not sure, but my impression is that Krugman, in his admiration for the IS-LM model, is trying too hard to identify IS-LM with the temporary-equilibrium approach, which I think represented a major conceptual advance over both the Keynesian model and the IS-LM representation of the Keynesian model. Temporary equilibrium and IS-LM are not necessarily inconsistent, but I mainly wanted to point out that the two aren’t the same, and shouldn’t be conflated.

Paul Krugman and Roger Farmer on Sticky Wages

I was pleasantly surprised last Friday to see that Paul Krugman took favorable notice of my post about sticky wages, but also registering some disagreement.

[Glasner] is partially right in suggesting that there has been a bit of a role reversal regarding the role of sticky wages in recessions: Keynes asserted that wage flexibility would not help, but Keynes’s self-proclaimed heirs ended up putting downward nominal wage rigidity at the core of their analysis. By the way, this didn’t start with the New Keynesians; way back in the 1940s Franco Modigliani had already taught us to think that everything depended on M/w, the ratio of the money supply to the wage rate.

That said, wage stickiness plays a bigger role in The General Theory — and in modern discussions that are consistent with what Keynes said — than Glasner indicates.

To document his assertion about Keynes, Krugman quotes a passage from the General Theory in which Keynes seems to suggest that in the nineteenth century inflexible wages were partially compensated for by price level movements. One might quibble with Krugman’s interpretation, but the payoff doesn’t seem worth the effort.

But I will quibble with the next paragraph in Krugman’s post.

But there’s another point: even if you don’t think wage flexibility would help in our current situation (and like Keynes, I think it wouldn’t), Keynesians still need a sticky-wage story to make the facts consistent with involuntary unemployment. For if wages were flexible, an excess supply of labor should be reflected in ever-falling wages. If you want to say that we have lots of willing workers unable to find jobs — as opposed to moochers not really seeking work because they’re cradled in Paul Ryan’s hammock — you have to have a story about why wages aren’t falling.

Not that I really disagree with Krugman that the behavior of wages since the 2008 downturn is consistent with some stickiness in wages. Nevertheless, it is still not necessarily the case that, if wages were flexible, an excess supply of labor would lead to ever-falling wages. In a search model of unemployment, if workers are expecting wages to rise every year at a 3% rate, and instead wages rise at only a 1% rate, the model predicts that unemployment will rise, and will continue to rise (or at least not return to the natural rate) as long as observed wage increases keep falling short of what workers were expecting. Presumably over time, wage expectations would adjust to the new lower rate of increase, but there is no guarantee that the transition would be speedy.
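The mechanism just described can be illustrated with a toy simulation. This is only a sketch of the qualitative story, not a model from the search literature; the functional form, the adjustment speed, and all parameter values are hypothetical, chosen solely to show that unemployment rises above the natural rate while wage growth falls short of expectations and returns only gradually as expectations adapt.

```python
# Illustrative sketch (not from the post): adaptive wage-growth
# expectations in a stylized search setting. All parameters are
# hypothetical and chosen only to display the qualitative dynamics.

def simulate(periods=20, expected_growth=0.03, actual_growth=0.01,
             adjustment_speed=0.2, sensitivity=5.0, natural_rate=0.05):
    """Return the unemployment path when actual wage growth (1%)
    falls short of expected wage growth (initially 3%)."""
    path = []
    for _ in range(periods):
        # Unemployment exceeds the natural rate by an amount
        # proportional to the expectations gap.
        gap = expected_growth - actual_growth
        path.append(round(natural_rate + sensitivity * gap, 4))
        # Expectations adjust only gradually toward observed wage growth.
        expected_growth += adjustment_speed * (actual_growth - expected_growth)
    return path

path = simulate()
# Unemployment jumps above the 5% natural rate, then drifts back
# only as expectations catch up with reality.
print(path[0], path[-1])
```

With these (arbitrary) numbers, unemployment starts at 15% and is still slightly above the natural rate twenty periods later, which is the point of the passage above: nothing guarantees a speedy transition.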

Krugman concludes:

So sticky wages are an important part of both old and new Keynesian analysis, not because wage cuts would help us, but simply to make sense of what we see.

My own view is actually a bit more guarded. I think that “sticky wages” is simply a name that we apply to a problematic phenomenon for which we still haven’t found a really satisfactory explanation. Search models, for all their theoretical elegance, simply can’t explain the observed process by which unemployment rises during recessions, i.e., through layoffs and a lack of job openings rather than through an increase in quits and refused offers, as search models imply. The suggestion in my earlier post was intended to offer a possible basis for understanding what the phrase “sticky wages” is actually describing.

Roger Farmer, a long-time and renowned UCLA economist, also commented on my post on his new blog. Welcome to the blogosphere, Roger.

Roger has a different take on the sticky-wage phenomenon. Roger argues, as did some of the commenters to my post, that wages are not sticky. To document this assertion, Roger presents a diagram showing that the decline of nominal wages closely tracked that of prices for the first six years of the Great Depression. From this evidence Roger concludes that nominal wage rigidity is not the cause of rising unemployment during the Great Depression, and presumably, not the cause of rising unemployment in the Little Depression.

[Figure: farmer_sticky_wages]

Instead, Roger argues, the rise in unemployment was caused by an outbreak of self-fulfilling pessimism. Roger believes that there are many alternative equilibria, and which equilibrium (actually, which equilibrium time path) we reach depends on what our expectations are. Roger also believes that our expectations are rational, so that we get what we expect; as he succinctly phrases it, “beliefs are fundamental.” I have a lot of sympathy for this way of looking at the economy. In fact, one of the early posts on this blog was entitled “Expectations are Fundamental.” But as I have explained in other posts, I am not so sure that expectations are rational in any useful sense, because I think that individual expectations diverge. I don’t think that there is a single way of looking at reality. If there are many potential equilibria, why should everyone expect the same equilibrium? I can be an optimist, and you can be a pessimist. If we agree, we will be right, but if we disagree, we will both be wrong. What economic mechanism is there to reconcile our expectations? In a world in which expectations diverge, a world of temporary equilibrium, there can be cumulative output reductions that get propagated across the economy as each sector fails to produce its maximum potential output, thereby reducing the demand for the output of other sectors to which it is linked. That’s what happens when there is trading at prices that don’t correspond to the full optimum equilibrium solution.

So I agree with Roger in part, but I think that the coordination problem is (at least potentially) more serious than he imagines.


About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
