Romer v. Lucas

A couple of months ago, Paul Romer created a stir by publishing “Mathiness in the Theory of Economic Growth” in the American Economic Review, an attack on two papers on aspects of growth theory, one by McGrattan and Prescott and the other by Lucas and Moll. He accused the authors of those papers of using mathematical modeling as a cover behind which to hide assumptions guaranteeing the results by which the authors could promote their research agendas. In subsequent blog posts, Romer has sharpened his attack, focusing it more directly on Lucas, whom he accuses of a non-scientific attachment to ideological predispositions that have led him to violate what Romer calls Feynman integrity, a concept eloquently described by Feynman himself in a 1974 commencement address at Caltech.

It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

Romer contrasts this admirable statement of what scientific integrity means with another by George Stigler, seemingly justifying, or at least excusing, a kind of special pleading on behalf of one’s own theory. And the institutional and perhaps ideological association between Stigler and Lucas seems to suggest that Lucas is inclined to follow the permissive and flexible Stiglerian ethic rather than the rigorous Feynman standard of scientific integrity. Romer regards this as a breach of the scientific method and a step backward for economics as a science.

I am not going to comment on the specific infraction that Romer accuses Lucas of having committed; I am not familiar with the mathematical question in dispute. Certainly if Lucas was aware that his argument in the paper Romer criticizes depended on the particular mathematical assumption in question, Lucas should have acknowledged that to be the case. And even if, as Lucas asserted in responding to a direct question by Romer, he could have derived the result in a more roundabout way, then he should have pointed that out, too. However, I don’t regard the infraction alleged by Romer to be more than a misdemeanor, hardly a scandalous breach of the scientific method.

Why did Lucas, who as far as I can tell was originally guided by Feynman integrity, switch to the mode of Stigler conviction? Market clearing did not have to evolve from auxiliary hypothesis to dogma that could not be questioned.

My conjecture is economists let small accidents of intellectual history matter too much. If we had behaved like scientists, things could have turned out very differently. It is worth paying attention to these accidents because doing so might let us take more control over the process of scientific inquiry that we are engaged in. At the very least, we should try to reduce the odds that personal frictions and simple misunderstandings could once again cause us to veer off on some damaging trajectory.

I suspect that it was personal friction and a misunderstanding that encouraged a turn toward isolation (or if you prefer, epistemic closure) by Lucas and colleagues. They circled the wagons because they thought that this was the only way to keep the rational expectations revolution alive. The misunderstanding is that Lucas and his colleagues interpreted the hostile reaction they received from such economists as Robert Solow to mean that they were facing implacable, unreasoning resistance from such departments as MIT. In fact, in a remarkably short period of time, rational expectations completely conquered the PhD program at MIT.

More recently Romer, having done graduate work both at MIT and Chicago in the late 1970s, has elaborated on the personal friction between Solow and Lucas and how that friction may have affected Lucas, causing him to disengage from the professional mainstream. Paul Krugman, who was at MIT when this nastiness was happening, is skeptical of Romer’s interpretation.

My own view is that being personally and emotionally attached to one’s own theories, whether for religious or ideological or other non-scientific reasons, is not necessarily a bad thing as long as there are social mechanisms allowing scientists with different scientific viewpoints an opportunity to make themselves heard. If there are such mechanisms, the need for Feynman integrity is minimized, because individual lapses of integrity will be exposed and remedied by criticism from other scientists; scientific progress is possible even if scientists don’t live up to the Feynman standards, and maintain their faith in their theories despite contradictory evidence. But, as I am going to suggest below, there are reasons to doubt that social mechanisms have been operating to discipline – not suppress, just discipline – dubious economic theorizing.

My favorite example of the importance of personal belief in, and commitment to the truth of, one’s own theories is Galileo. As discussed by T. S. Kuhn in The Structure of Scientific Revolutions, Galileo was arguing for a paradigm change in how to think about the universe, despite being confronted by empirical evidence that appeared to refute the Copernican worldview he believed in: the observations that the sun revolves around the earth, and that the earth, as we directly perceive it, is, apart from the occasional earthquake, totally stationary — good old terra firma. Despite that apparently contradictory evidence, Galileo had an alternative vision of the universe in which the obvious movement of the sun in the heavens was explained by the spinning of the earth on its axis, and the stationarity of the earth by the assumption that all our surroundings move along with the earth, rendering its motion imperceptible, our perception of motion being relative to a specific frame of reference.

At bottom, this was an almost metaphysical world view not directly refutable by any simple empirical test. But Galileo adopted this worldview or paradigm, because he deeply believed it to be true, and was therefore willing to defend it at great personal cost, refusing to recant his Copernican view when he could have easily appeased the Church by describing the Copernican theory as just a tool for predicting planetary motion rather than an actual representation of reality. Early empirical tests did not support heliocentrism over geocentrism, but Galileo had faith that theoretical advancements and improved measurements would eventually vindicate the Copernican theory. He was right of course, but strict empiricism would have led to a premature rejection of heliocentrism. Without a deep personal commitment to the Copernican worldview, Galileo might not have articulated the case for heliocentrism as persuasively as he did, and acceptance of heliocentrism might have been delayed for a long time.

Imre Lakatos called such deeply-held views underlying a scientific theory the hard core of the theory (aka scientific research program), a set of beliefs that are maintained despite apparent empirical refutation. The response to any empirical refutation is not to abandon or change the hard core but to adjust what Lakatos called the protective belt of the theory. Eventually, as refutations or empirical anomalies accumulate, the research program may undergo a crisis, leading to its abandonment, or it may simply degenerate if it fails to solve new problems or discover any new empirical facts or regularities. So Romer’s criticism of Lucas’s dogmatic attachment to market clearing (Lucas frequently makes use of ad hoc price-stickiness assumptions; I don’t know why Romer identifies market clearing as a Lucasian dogma) may be no more justified from a history of science perspective than would criticism of Galileo’s dogmatic attachment to heliocentrism.

So while I have many problems with Lucas, lack of Feynman integrity is not really one of them, certainly not in the top ten. What I find more disturbing is his narrow conception of what economics is. As he himself wrote in an autobiographical sketch for Lives of the Laureates, he was bewitched by the beauty and power of Samuelson’s Foundations of Economic Analysis when he read it the summer before starting his training as a graduate student at Chicago in 1960. Although it did not have the transformative effect on me that it had on Lucas, I greatly admire the Foundations, but regardless of whether Samuelson himself meant to suggest such an idea (which I doubt), it is absurd to draw this conclusion from it:

I loved the Foundations. Like so many others in my cohort, I internalized its view that if I couldn’t formulate a problem in economic theory mathematically, I didn’t know what I was doing. I came to the position that mathematical analysis is not one of many ways of doing economic theory: It is the only way. Economic theory is mathematical analysis. Everything else is just pictures and talk.

Oh, come on. Would anyone ever think that unless you can formulate the problem of whether the earth revolves around the sun or the sun around the earth mathematically, you don’t know what you are doing? And, yet, remarkably, on the page following that silly assertion, one finds a totally brilliant description of what it was like to take graduate price theory from Milton Friedman.

Friedman rarely lectured. His class discussions were often structured as debates, with student opinions or newspaper quotes serving to introduce a problem and some loosely stated opinions about it. Then Friedman would lead us into a clear statement of the problem, considering alternative formulations as thoroughly as anyone in the class wanted to. Once formulated, the problem was quickly analyzed—usually diagrammatically—on the board. So we learned how to formulate a model, to think about and decide which features of a problem we could safely abstract from and which we needed to put at the center of the analysis. Here “model” is my term: It was not a term that Friedman liked or used. I think that for him talking about modeling would have detracted from the substantive seriousness of the inquiry we were engaged in, would divert us away from the attempt to discover “what can be done” into a merely mathematical exercise. [my emphasis].

Despite his respect for Friedman, it’s clear that Lucas did not adopt and internalize Friedman’s approach to economic problem solving, but instead internalized the caricature he extracted from Samuelson’s Foundations: that mathematical analysis is the only legitimate way of doing economic theory, and that, in particular, the essence of macroeconomics consists in a combination of axiomatic formalism and philosophical reductionism (microfoundationalism). For Lucas, the only scientifically legitimate macroeconomic models are those that can be deduced from the axiomatized Arrow-Debreu-McKenzie general equilibrium model, with solutions that can be computed and simulated in such a way that the simulations can be matched up against the available macroeconomic time series on output, investment and consumption.

This was both bad methodology and bad science, restricting the formulation of economic problems to those for which mathematical techniques are available to be deployed in finding solutions. On the one hand, the rational-expectations assumption made finding solutions to certain intertemporal models tractable; on the other, the assumption was justified as being required by the rationality assumptions of neoclassical price theory.

In a recent review of Lucas’s Collected Papers on Monetary Theory, Thomas Sargent makes a fascinating reference to Kenneth Arrow’s 1967 review of the first two volumes of Paul Samuelson’s Collected Scientific Papers in which Arrow referred to the problematic nature of the neoclassical synthesis of which Samuelson was a chief exponent.

Samuelson has not addressed himself to one of the major scandals of current price theory, the relation between microeconomics and macroeconomics. Neoclassical microeconomic equilibrium with fully flexible prices presents a beautiful picture of the mutual articulations of a complex structure, full employment being one of its major elements. What is the relation between this world and either the real world with its recurrent tendencies to unemployment of labor, and indeed of capital goods, or the Keynesian world of underemployment equilibrium? The most explicit statement of Samuelson’s position that I can find is the following: “Neoclassical analysis permits of fully stable underemployment equilibrium only on the assumption of either friction or a peculiar concatenation of wealth-liquidity-interest elasticities. . . . [The neoclassical analysis] goes far beyond the primitive notion that, by definition of a Walrasian system, equilibrium must be at full employment.” . . .

In view of the Phillips curve concept in which Samuelson has elsewhere shown such interest, I take the second sentence in the above quotation to mean that wages are stationary whenever unemployment is X percent, with X positive; thus stationary unemployment is possible. In general, one can have a neoclassical model modified by some elements of price rigidity which will yield Keynesian-type implications. But such a model has yet to be constructed in full detail, and the question of why certain prices remain rigid becomes of first importance. . . . Certainly, as Keynes emphasized the rigidity of prices has something to do with the properties of money; and the integration of the demand and supply of money with general competitive equilibrium theory remains incomplete despite attempts beginning with Walras himself.

If the neoclassical model with full price flexibility were sufficiently unrealistic that stable unemployment equilibrium be possible, then in all likelihood the bulk of the theorems derived by Samuelson, myself, and everyone else from the neoclassical assumptions are also contrafactual. The problem is not resolved by what Samuelson has called “the neoclassical synthesis,” in which it is held that the achievement of full employment requires Keynesian intervention but that neoclassical theory is valid when full employment is reached. . . .

Obviously, I believe firmly that the mutual adjustment of prices and quantities represented by the neoclassical model is an important aspect of economic reality worthy of the serious analysis that has been bestowed on it; and certain dramatic historical episodes – most recently the reconversion of the United States from World War II and the postwar European recovery – suggest that an economic mechanism exists which is capable of adaptation to radical shifts in demand and supply conditions. On the other hand, the Great Depression and the problems of developing countries remind us dramatically that something beyond, but including, neoclassical theory is needed.

Perhaps in a future post, I may discuss this passage, including a few sentences that I have omitted here, in greater detail. For now I will just say that Arrow’s reference to a “neoclassical microeconomic equilibrium with fully flexible prices” seems very strange inasmuch as price flexibility has absolutely no role in the proofs of the existence of a competitive general equilibrium for which Arrow and Debreu and McKenzie are justly famous. All the theorems Arrow et al. proved about the neoclassical equilibrium were related to existence, uniqueness and optimality of an equilibrium supported by an equilibrium set of prices. Price flexibility was not involved in those theorems, because the theorems had nothing to do with how prices adjust in response to a disequilibrium situation. What makes this juxtaposition of neoclassical microeconomic equilibrium with fully flexible prices even more remarkable is that about eight years earlier Arrow wrote a paper (“Toward a Theory of Price Adjustment”) whose main concern was the lack of any theory of price adjustment in competitive equilibrium, about which I will have more to say below.

Sargent also quotes from two lectures in which Lucas referred to Don Patinkin’s treatise Money, Interest and Prices which provided perhaps the definitive statement of the neoclassical synthesis Samuelson espoused. In one lecture (“My Keynesian Education” presented to the History of Economics Society in 2003) Lucas explains why he thinks Patinkin’s book did not succeed in its goal of integrating value theory and monetary theory:

I think Patinkin was absolutely right to try and use general equilibrium theory to think about macroeconomic problems. Patinkin and I are both Walrasians, whatever that means. I don’t see how anybody can not be. It’s pure hindsight, but now I think that Patinkin’s problem was that he was a student of Lange’s, and Lange’s version of the Walrasian model was already archaic by the end of the 1950s. Arrow and Debreu and McKenzie had redone the whole theory in a clearer, more rigorous, and more flexible way. Patinkin’s book was a reworking of his Chicago thesis from the middle 1940s and had not benefited from this more recent work.

In the other lecture, his 2003 Presidential address to the American Economic Association, Lucas commented further on why Patinkin fell short in his quest to unify monetary and value theory:

When Don Patinkin gave his Money, Interest, and Prices the subtitle “An Integration of Monetary and Value Theory,” value theory meant, to him, a purely static theory of general equilibrium. Fluctuations in production and employment, due to monetary disturbances or to shocks of any other kind, were viewed as inducing disequilibrium adjustments, unrelated to anyone’s purposeful behavior, modeled with vast numbers of free parameters. For us, today, value theory refers to models of dynamic economies subject to unpredictable shocks, populated by agents who are good at processing information and making choices over time. The macroeconomic research I have discussed today makes essential use of value theory in this modern sense: formulating explicit models, computing solutions, comparing their behavior quantitatively to observed time series and other data sets. As a result, we are able to form a much sharper quantitative view of the potential of changes in policy to improve peoples’ lives than was possible a generation ago.

So, as Sargent observes, Lucas recreated an updated neoclassical synthesis of his own based on the intertemporal Arrow-Debreu-McKenzie version of the Walrasian model, augmented by a rationale for the holding of money and perhaps some form of monetary policy, via the assumption of credit-market frictions and sticky prices. Despite the repudiation of the updated neoclassical synthesis by his friend Edward Prescott, for whom monetary policy is irrelevant, Lucas clings to neoclassical synthesis 2.0. Sargent quotes this passage from Lucas’s 1994 retrospective review of A Monetary History of the US by Friedman and Schwartz to show how tightly Lucas clings to neoclassical synthesis 2.0:

In Kydland and Prescott’s original model, and in many (though not all) of its descendants, the equilibrium allocation coincides with the optimal allocation: Fluctuations generated by the model represent an efficient response to unavoidable shocks to productivity. One may thus think of the model not as a positive theory suited to all historical time periods but as a normative benchmark providing a good approximation to events when monetary policy is conducted well and a bad approximation when it is not. Viewed in this way, the theory’s relative success in accounting for postwar experience can be interpreted as evidence that postwar monetary policy has resulted in near-efficient behavior, not as evidence that money doesn’t matter.

Indeed, the discipline of real business cycle theory has made it more difficult to defend real alternatives to a monetary account of the 1930s than it was 30 years ago. It would be a term-paper-size exercise, for example, to work out the possible effects of the 1930 Smoot-Hawley Tariff in a suitably adapted real business cycle model. By now, we have accumulated enough quantitative experience with such models to be sure that the aggregate effects of such a policy (in an economy with a 5% foreign trade sector before the Act and perhaps a percentage point less after) would be trivial.

Nevertheless, in the absence of some catastrophic error in monetary policy, Lucas evidently believes that the key features of the Arrow-Debreu-McKenzie model are closely approximated in the real world. That may well be true. But if it is, Lucas has no real theory to explain why.

In his 1959 paper (“Toward a Theory of Price Adjustment”), which I just mentioned, Arrow noted that the theory of competitive equilibrium has no explanation of how equilibrium prices are actually set. Indeed, the idea of competitive price adjustment is beset by a paradox: all agents in a general equilibrium being assumed to be price takers, how is it that a new equilibrium price is ever arrived at following any disturbance to an initial equilibrium? Arrow had no answer to the question, but offered the suggestion that, out of equilibrium, agents are not price takers, but price searchers, possessing some measure of market power to set price in the transition between the old and new equilibrium. But the upshot of Arrow’s discussion was that the problem and the paradox awaited solution. Almost sixty years on, some of us are still waiting, but for Lucas and the Lucasians, there is neither problem nor paradox, because the actual price is the equilibrium price, and the equilibrium price is always the (rationally) expected price.

If the social functions of science were being efficiently discharged, this rather obvious replacement of problem solving by question begging would not have escaped effective challenge and opposition. But Lucas was able to provide cover for this substitution by persuading the profession to embrace his microfoundational methodology, while offering irresistible opportunities for professional advancement to younger economists who could master the new analytical techniques that Lucas and others were rapidly introducing, thereby neutralizing or coopting many of the natural opponents to what became modern macroeconomics. So while Romer considers the conquest of MIT by the rational-expectations revolution, despite the opposition of Robert Solow, to be evidence for the advance of economic science, I regard it as a sign of the social failure of science to discipline a regressive development driven by the elevation of technique over substance.

Krugman’s Second Best

A couple of days ago Paul Krugman discussed “Second-best Macroeconomics” on his blog. I have no real quarrel with anything he said, but I would like to amplify his discussion of what is sometimes called the problem of second-best, because I think the problem of second best has some really important implications for macroeconomics beyond the limited application of the problem that Krugman addressed. The basic idea underlying the problem of second best is not that complicated, but it has many applications, and what made the 1956 paper (“The General Theory of Second Best”) by R. G. Lipsey and Kelvin Lancaster a classic was that it showed how a number of seemingly disparate problems were really all applications of a single unifying principle. Here’s how Krugman frames his application of the second-best problem.

[T]he whole western world has spent years suffering from a severe shortfall of aggregate demand; in Europe a severe misalignment of national costs and prices has been overlaid on this aggregate problem. These aren’t hard problems to diagnose, and simple macroeconomic models — which have worked very well, although nobody believes it — tell us how to solve them. Conventional monetary policy is unavailable thanks to the zero lower bound, but fiscal policy is still on tap, as is the possibility of raising the inflation target. As for misaligned costs, that’s where exchange rate adjustments come in. So no worries: just hit the big macroeconomic That Was Easy button, and soon the troubles will be over.

Except that all the natural answers to our problems have been ruled out politically. Austerians not only block the use of fiscal policy, they drive it in the wrong direction; a rise in the inflation target is impossible given both central-banker prejudices and the power of the goldbug right. Exchange rate adjustment is blocked by the disappearance of European national currencies, plus extreme fear over technical difficulties in reintroducing them.

As a result, we’re stuck with highly problematic second-best policies like quantitative easing and internal devaluation.

I might quibble with Krugman about the quality of the available macroeconomic models, by which I am less impressed than he, but that’s really beside the point of this post, so I won’t even go there. But I can’t let the comment about the inflation target pass without observing that it’s not just “central-banker prejudices” and the “goldbug right” that are to blame for the failure to raise the inflation target; for reasons that I don’t claim to understand myself, the political consensus in both Europe and the US in favor of perpetually low or zero inflation has been supported with scarcely any less fervor by the left than the right. It’s only some eccentric economists – from diverse positions on the political spectrum – that have been making the case for inflation as a recovery strategy. So the political failure has been uniform across the political spectrum.

OK, having registered my factual disagreement with Krugman about the source of our anti-inflationary intransigence, I can now get to the main point. Here’s Krugman:

“[S]econd best” is an economic term of art. It comes from a classic 1956 paper by Lipsey and Lancaster, which showed that policies which might seem to distort markets may nonetheless help the economy if markets are already distorted by other factors. For example, suppose that a developing country’s poorly functioning capital markets are failing to channel savings into manufacturing, even though it’s a highly profitable sector. Then tariffs that protect manufacturing from foreign competition, raise profits, and therefore make more investment possible can improve economic welfare.

The problems with second best as a policy rationale are familiar. For one thing, it’s always better to address existing distortions directly, if you can — second best policies generally have undesirable side effects (e.g., protecting manufacturing from foreign competition discourages consumption of industrial goods, may reduce effective domestic competition, and so on). . . .

But here we are, with anything resembling first-best macroeconomic policy ruled out by political prejudice, and the distortions we’re trying to correct are huge — one global depression can ruin your whole day. So we have quantitative easing, which is of uncertain effectiveness, probably distorts financial markets at least a bit, and gets trashed all the time by people stressing its real or presumed faults; someone like me is then put in the position of having to defend a policy I would never have chosen if there seemed to be a viable alternative.

In a deep sense, I think the same thing is involved in trying to come up with less terrible policies in the euro area. The deal that Greece and its creditors should have reached — large-scale debt relief, primary surpluses kept small and not ramped up over time — is a far cry from what Greece should and probably would have done if it still had the drachma: big devaluation now. The only way to defend the kind of thing that was actually on the table was as the least-worst option given that the right response was ruled out.

That’s one example of a second-best problem, but it’s only one of a variety of problems, and not, it seems to me, the most macroeconomically interesting. So here’s the second-best problem that I want to discuss: given one distortion (i.e., a departure from one of the conditions for Pareto-optimality), reaching a second-best sub-optimum requires violating other – likely all the other – conditions for reaching the first-best (Pareto) optimum. The strategy for getting to the second-best suboptimum cannot be to achieve as many of the conditions for reaching the first-best optimum as possible; the conditions for reaching the second-best optimum are in general totally different from the conditions for reaching the first-best optimum.
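The point can be made concrete with a toy tax example (a sketch of my own; the functional forms, shares, and tax rates are illustrative assumptions, not anything in Lipsey and Lancaster). Take three goods, Cobb-Douglas utility, producer prices fixed at one, an irremovable tax on good x, and a planner choosing a common tax on goods y and z, with all revenue rebated lump-sum. If the planner clings to the first-best condition in the undistorted markets, a zero tax, welfare is lower than if the planner deliberately matches the irremovable wedge:

```python
# Illustrative second-best tax example (assumed functional forms, not from
# Lipsey and Lancaster): Cobb-Douglas consumer, three goods, producer prices 1,
# an irremovable ad valorem tax T_X on good x, revenue rebated lump-sum.

A, B, C = 0.5, 0.3, 0.2   # assumed Cobb-Douglas expenditure shares
T_X = 0.5                 # the fixed distortion the planner cannot remove

def welfare(t_yz):
    """Indirect utility when goods y and z carry a common tax t_yz."""
    taxes = (T_X, t_yz, t_yz)
    shares = (A, B, C)
    prices = tuple(1.0 + t for t in taxes)   # consumer prices
    # With Cobb-Douglas demands x_i = share_i * m / q_i and a lump-sum rebate,
    # income m solves m = 1 + sum_i t_i * x_i, i.e. m = 1 / (1 - s).
    s = sum(sh * t / q for sh, t, q in zip(shares, taxes, prices))
    m = 1.0 / (1.0 - s)
    demands = [sh * m / q for sh, q in zip(shares, prices)]
    u = 1.0
    for d, sh in zip(demands, shares):
        u *= d ** sh
    return u

# The planner's second-best problem: given T_X, pick the welfare-maximizing t_yz.
grid = [i / 100 for i in range(101)]
best_t = max(grid, key=welfare)
print(f"welfare keeping the first-best condition (t_yz = 0): {welfare(0.0):.4f}")
print(f"welfare-maximizing t_yz: {best_t:.2f}")   # matches the irremovable wedge, 0.50
```

In this deliberately rigged economy a uniform tax with a lump-sum rebate is non-distorting, so the second-best policy happens to recover the first-best allocation; the general lesson survives the special case: coping with one unavoidable violation of the first-best conditions means violating the others as well, not preserving them.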

So what’s the deeper macroeconomic significance of the second-best principle?

I would put it this way. Suppose there’s a pre-existing macroeconomic equilibrium, all necessary optimality conditions between marginal rates of substitution in production and consumption and relative prices being satisfied. Let the initial equilibrium be subjected to a macroeconomic disturbance. The disturbance will immediately affect a range — possibly all — of the individual markets, and all optimality conditions will change, so that no market will be unaffected when a new optimum is realized. But while optimality for the system as a whole requires that prices adjust in such a way that the optimality conditions are satisfied in all markets simultaneously, each price adjustment that actually occurs is a response to the conditions in a single market – the relationship between amounts demanded and supplied at the existing price. Each price adjustment being a response to a supply-demand imbalance in an individual market, there is no theory to explain how a process of price adjustment in real time will ever restore an equilibrium in which all optimality conditions are simultaneously satisfied.

Invoking a general Smithian invisible-hand theorem won’t work, because, in this context, the invisible-hand theorem tells us only that if an equilibrium price vector were reached, the system would be in an optimal state of rest with no tendency to change. The invisible-hand theorem provides no account of how the equilibrium price vector is discovered by any price-adjustment process in real time. (And even tatonnement, a non-real-time process, is not guaranteed to work, as shown by the Sonnenschein-Mantel-Debreu Theorem.) With price adjustment in each market entirely governed by the demand-supply imbalance in that market, market prices determined in individual markets need not ensure that all markets clear simultaneously or satisfy the optimality conditions.
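Scarf’s 1960 example makes this concrete: a three-good, three-consumer exchange economy in which the textbook tatonnement rule, raising each price in proportion to its own excess demand, provably cycles around the equilibrium forever without converging. The sketch below follows Scarf’s setup (consumer i is endowed with one unit of good i and has Leontief utility over goods i and i+1); the step size and starting prices are my own assumptions:

```python
# Walrasian tatonnement on Scarf's (1960) three-good example, where the
# process cycles instead of converging. Consumer i is endowed with one unit
# of good i and has Leontief utility min(x_i, x_{i+1 mod 3}), so consumer i
# spends its income p_i on equal quantities of goods i and i+1.
# Step size and starting prices below are illustrative assumptions.

def excess_demand(p):
    """Aggregate excess demand z(p); Walras's law gives p . z = 0."""
    z = []
    for j in range(3):
        prev, nxt = (j - 1) % 3, (j + 1) % 3
        demand = p[j] / (p[j] + p[nxt]) + p[prev] / (p[prev] + p[j])
        z.append(demand - 1.0)   # one unit of each good is endowed
    return z

def tatonnement(p, step=0.01, iters=20000):
    """Textbook rule: raise each price in proportion to its excess demand."""
    for _ in range(iters):
        z = excess_demand(p)
        p = [pi + step * zi for pi, zi in zip(p, z)]
    return p

p0 = [1.2, 1.0, 0.8]                 # start away from the equal-price equilibrium
p_final = tatonnement(p0)
spread = max(p_final) - min(p_final)
print(f"final prices: {[round(x, 3) for x in p_final]}  spread: {spread:.3f}")
# The spread never shrinks toward zero: the auctioneer orbits the equilibrium.
```

Every price moves in the right direction for its own market at every step, yet the system as a whole never settles down; that is precisely the gap between single-market price adjustment and economy-wide equilibration that Arrow pointed to.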

Now it’s true that we have a simple theory of price adjustment for single markets: prices rise if there’s an excess demand and fall if there’s an excess supply. If demand and supply curves have normal slopes, the simple price adjustment rule moves the price toward equilibrium. But that partial-equilibrium story is contingent on the implicit assumption that all other markets are in equilibrium. When all markets are in disequilibrium, moving toward equilibrium in one market will have repercussions on other markets, and the simple story of how price adjustment in response to a disequilibrium restores equilibrium breaks down, because market conditions in every market depend on market conditions in every other market. So unless all markets arrive at equilibrium simultaneously, there’s no guarantee that equilibrium will obtain in any of the markets. Disequilibrium in any market can mean disequilibrium in every market. And if a single market is out of kilter, the second-best, suboptimal solution for the system is totally different from the first-best solution for all markets.

In the standard microeconomics we are taught in Econ 1 and Econ 101, all these complications are assumed away by restricting the analysis of price adjustment to a single market. In other words, as I have pointed out in a number of previous posts (here and here), standard microeconomics is built on macroeconomic foundations, and the currently fashionable demand for macroeconomics to be microfounded turns out to be based on question-begging circular reasoning. Partial equilibrium is a wonderful pedagogical device, and it is an essential tool in applied microeconomics, but its limitations are often misunderstood or ignored.

An early macroeconomic application of the theory of the second best is the statement by the quintessentially orthodox pre-Keynesian Cambridge economist Frederick Lavington, who wrote in his book The Trade Cycle that “the inactivity of all is the cause of the inactivity of each.” Each successive departure from the conditions for second-, third-, fourth-, and eventually nth-best sub-optima has additional negative feedback effects on the rest of the economy, moving it further and further away from a Pareto-optimal equilibrium with maximum output and full employment. The fewer people that are employed, the more difficult it becomes for anyone to find employment.

This insight was actually admirably, if inexactly, expressed by Say’s Law: supply creates its own demand. The cause of the cumulative contraction of output in a depression is not, as was often suggested, that too much output had been produced, but a breakdown of coordination in which disequilibrium spreads in epidemic fashion from market to market, leaving individual transactors unable to compensate by altering the terms on which they are prepared to supply goods and services. The idea that a partial-equilibrium response, a fall in money wages, can by itself remedy a general-disequilibrium disorder is untenable. Keynes and the Keynesians were therefore completely wrong to accuse Say of committing a fallacy in diagnosing the cause of depressions. The only fallacy lay in the assumption that market adjustments would automatically ensure the restoration of something resembling full-employment equilibrium.

Price Stickiness and Macroeconomics

Noah Smith has a classically snide rejoinder to Stephen Williamson’s outrage at Noah’s Bloomberg paean to price stickiness and to the classic Ball and Mankiw article on the subject, an article that provoked an embarrassingly outraged response from Robert Lucas when it was published over 20 years ago. I don’t know if Lucas ever got over it, but evidently Williamson hasn’t.

Now to be fair, Lucas’s outrage, though misplaced, was understandable, at least if one understands that Lucas was so offended by the ironic tone in which Ball and Mankiw cast themselves as defenders of traditional macroeconomics – including both Keynesians and Monetarists – against the onslaught of “heretics” like Lucas, Sargent, Kydland and Prescott that he just stopped reading after the first few pages. Then, in a fit of righteous indignation, he wrote a diatribe attacking Ball and Mankiw as religious fanatics trying to halt the progress of science, as if that were the real message of the paper – not, to say the least, a very sophisticated reading of what Ball and Mankiw wrote.

While I am not hostile to the idea of price stickiness — one of the most popular posts I have written being an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input, and most output, prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness and about many explanations, including those offered by Ball and Mankiw, for price stickiness. I think that the negative reactions that price stickiness elicits from a lot of economists — and not only from Lucas and Williamson — reflect a feeling that price stickiness is not well grounded in any economic theory.

Let me offer a slightly different criticism of price stickiness as a feature of macroeconomic models, which is simply that although price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. That is what students are taught in Econ 1. And it is an important lesson, but it is also a “partial” lesson. It is partial, because it applies to a single market that is out of equilibrium. The implicit assumption in that exercise is that nothing else is changing, which means that all other markets — well, not quite all other markets, but I will ignore that nuance – are in equilibrium. That’s what I mean when I say (as I have done before) that just as macroeconomics needs microfoundations, microeconomics needs macrofoundations.

Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. Even more disturbing, the standard price-adjustment rule may fail, even under a tatonnement process in which no trading is allowed at disequilibrium prices, to lead to the discovery of the equilibrium price vector. Of course, in the real world trading occurs routinely at disequilibrium prices, so that the “mechanical” forces tending to move an economy toward equilibrium are even weaker than the standard analysis of price adjustment would suggest.
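The contrast between the one-market and many-market cases is easy to exhibit numerically. Below is a minimal sketch (my construction, not anything in the post itself): the very same price-adjustment rule that converges immediately in an isolated market fails to converge in Scarf’s well-known three-good exchange economy, in which tatonnement orbits around the equilibrium price vector indefinitely. The particular demand functions and parameter values are illustrative assumptions.

```python
import math

def excess_demand(p):
    """Excess demands in Scarf's (1960) three-good exchange economy.

    Consumer i is endowed with one unit of good i and wants goods i and
    i+1 (mod 3) in fixed 1:1 proportions, so demand for good j comes
    from consumers j and j-1."""
    n = 3
    z = []
    for j in range(n):
        prev, nxt = p[(j - 1) % n], p[(j + 1) % n]
        # demand from consumer j plus demand from consumer j-1,
        # minus the one-unit endowment of good j
        z.append(p[j] / (p[j] + nxt) + prev / (prev + p[j]) - 1.0)
    return z

def tatonnement(p, step=0.01, iters=20000):
    """The standard rule: move each price in proportion to its own
    market's excess demand."""
    for _ in range(iters):
        z = excess_demand(p)
        p = [pj + step * zj for pj, zj in zip(p, z)]
    return p

def distance_from_equilibrium(p):
    # (1, 1, 1) is the equilibrium price vector (up to scale)
    scale = 3.0 / sum(p)
    return math.sqrt(sum((scale * pj - 1.0) ** 2 for pj in p))

def single_market(p=1.0, step=0.1, iters=500):
    """The same rule in one isolated market with excess demand 10 - 2p:
    it converges straight to the equilibrium price of 5."""
    for _ in range(iters):
        p += step * (10.0 - 2.0 * p)
    return p

print(round(single_market(), 4))            # converges: prints 5.0
p_final = tatonnement([1.2, 0.9, 0.9])
print(round(distance_from_equilibrium(p_final), 3))
```

In Scarf’s economy every individual market responds correctly to its own excess demand at every step, yet the price vector remains bounded away from (1, 1, 1) after 20,000 adjustment steps: disequilibrium in each market keeps regenerating disequilibrium in the others.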

This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Franklin Fisher made this point 30 years ago in an important, but insufficiently appreciated, volume Disequilibrium Foundations of Equilibrium Economics. But the idea goes back even further: to Hayek’s important work on intertemporal equilibrium, especially his classic paper “Economics and Knowledge,” formalized by Hicks in the temporary-equilibrium model described in Value and Capital.

The key point made by Hayek in this context is that there can be an intertemporal equilibrium if and only if all agents formulate their individual plans on the basis of the same expectations of future prices. If their expectations for future prices are not the same, then any plans based on incorrect price expectations will have to be revised, or abandoned altogether, as price expectations are disappointed over time. For price adjustment to lead an economy back to equilibrium, the price adjustment must converge on an equilibrium price vector and on correct price expectations. But, as Hayek understood in 1937, and as Fisher explained in a dense treatise 30 years ago, we have no economic theory that explains how such a price vector, even if it exists, is arrived at, even under a tatonnement process, much less under decentralized price setting. Pinning the blame on this vague thing called price stickiness doesn’t address the deeper underlying theoretical issue.

Of course for Lucas et al. to scoff at price stickiness on these grounds is a bit rich, because Lucas and his followers seem entirely comfortable with assuming that the equilibrium price vector is rationally expected. Indeed, rational expectation of the equilibrium price vector is held up by Lucas as precisely the microfoundation that transformed the unruly field of macroeconomics into a real science.

Traffic Jams and Multipliers

Since my previous post, which I closed by quoting the abstract of Brian Arthur’s paper “Complexity Economics: A Different Framework for Economic Thought,” I have been reading his paper and some of the papers he cites, especially Magda Fontana’s paper “The Santa Fe Perspective on Economics: Emerging Patterns in the Science of Complexity,” and Mark Blaug’s paper “The Formalist Revolution of the 1950s.” The papers bring together a number of themes that I have been emphasizing in previous posts on what I consider the misguided focus of modern macroeconomics on rational-expectations equilibrium as the organizing principle of macroeconomic theory. Among these themes are the importance of coordination failures in explaining macroeconomic fluctuations; the inappropriateness of the full general-equilibrium paradigm in macroeconomics; and the mistaken transformation of microfoundations from a theoretical problem to be solved into an absolute methodological requirement to be insisted upon (almost exactly analogous to the absurd transformation of the mind-body problem into a dogmatic insistence that the mind is merely a figment of our own imagination) – or, stated another way, a recognition that macrofoundations are just as necessary for economics as microfoundations.

Let me quote again from Arthur’s essay, this time a beautiful passage that captures the interdependence between the micro and macro perspectives:

To look at the economy, or areas within the economy, from a complexity viewpoint then would mean asking how it evolves, and this means examining in detail how individual agents’ behaviors together form some outcome and how this might in turn alter their behavior as a result. Complexity in other words asks how individual behaviors might react to the pattern they together create, and how that pattern would alter itself as a result. This is often a difficult question; we are asking how a process is created from the purposed actions of multiple agents. And so economics early in its history took a simpler approach, one more amenable to mathematical analysis. It asked not how agents’ behaviors would react to the aggregate patterns these created, but what behaviors (actions, strategies, expectations) would be upheld by — would be consistent with — the aggregate patterns these caused. It asked in other words what patterns would call for no changes in microbehavior, and would therefore be in stasis, or equilibrium. (General equilibrium theory thus asked what prices and quantities of goods produced and consumed would be consistent with — would pose no incentives for change to — the overall pattern of prices and quantities in the economy’s markets. Classical game theory asked what strategies, moves, or allocations would be consistent with — would be the best course of action for an agent (under some criterion) — given the strategies, moves, allocations his rivals might choose. And rational expectations economics asked what expectations would be consistent with — would on average be validated by — the outcomes these expectations together created.)

This equilibrium shortcut was a natural way to examine patterns in the economy and render them open to mathematical analysis. It was an understandable — even proper — way to push economics forward. And it achieved a great deal. Its central construct, general equilibrium theory, is not just mathematically elegant; in modeling the economy it re-composes it in our minds, gives us a way to picture it, a way to comprehend the economy in its wholeness. This is extremely valuable, and the same can be said for other equilibrium modelings: of the theory of the firm, of international trade, of financial markets.

But there has been a price for this equilibrium finesse. Economists have objected to it — to the neoclassical construction it has brought about — on the grounds that it posits an idealized, rationalized world that distorts reality, one whose underlying assumptions are often chosen for analytical convenience. I share these objections. Like many economists, I admire the beauty of the neoclassical economy; but for me the construct is too pure, too brittle — too bled of reality. It lives in a Platonic world of order, stasis, knowableness, and perfection. Absent from it is the ambiguous, the messy, the real. (pp. 2-3)

Later in the essay, Arthur provides a simple example of a non-equilibrium complex process: traffic flow.

A typical model would acknowledge that at close separation from cars in front, cars lower their speed, and at wide separation they raise it. A given high density of traffic of N cars per mile would imply a certain average separation, and cars would slow or accelerate to a speed that corresponds. Trivially, an equilibrium speed emerges, and if we were restricting solutions to equilibrium that is all we would see. But in practice at high density, a nonequilibrium phenomenon occurs. Some car may slow down — its driver may lose concentration or get distracted — and this might cause cars behind to slow down. This immediately compresses the flow, which causes further slowing of the cars behind. The compression propagates backwards, traffic backs up, and a jam emerges. In due course the jam clears. But notice three things. The phenomenon’s onset is spontaneous; each instance of it is unique in time of appearance, length of propagation, and time of clearing. It is therefore not easily captured by closed-form solutions, but best studied by probabilistic or statistical methods. Second, the phenomenon is temporal, it emerges or happens within time, and cannot appear if we insist on equilibrium. And third, the phenomenon occurs neither at the micro-level (individual car level) nor at the macro-level (overall flow on the road) but at a level in between — the meso-level. (p. 9)
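Arthur’s traffic story is not hard to reproduce numerically. The sketch below uses the well-known Nagel-Schreckenberg cellular-automaton model of single-lane traffic (my choice of formalization; Arthur describes the phenomenon, not this particular model): cars on a ring road accelerate toward a speed limit, brake to avoid the car ahead, and occasionally slow down at random. At high density, a single random slowdown compresses the flow behind it, and jams emerge spontaneously and propagate backward, just as in the passage above.

```python
import random

def nagel_schreckenberg(n_cells=100, n_cars=25, v_max=5, p_slow=0.3,
                        steps=200, seed=0):
    """Single-lane traffic on a ring of n_cells cells.

    Returns the average speed of the fleet at each time step."""
    rng = random.Random(seed)
    pos = sorted(rng.sample(range(n_cells), n_cars))  # no two cars per cell
    vel = [0] * n_cars
    avg_speeds = []
    for _ in range(steps):
        for i in range(n_cars):
            # empty cells between car i and the car ahead of it on the ring
            gap = (pos[(i + 1) % n_cars] - pos[i] - 1) % n_cells
            vel[i] = min(vel[i] + 1, v_max)   # accelerate toward the limit
            vel[i] = min(vel[i], gap)         # brake to avoid the car ahead
            if vel[i] > 0 and rng.random() < p_slow:
                vel[i] -= 1                   # random slowdown (driver noise)
        # all cars move simultaneously, using the velocities just computed
        pos = [(pos[i] + vel[i]) % n_cells for i in range(n_cars)]
        avg_speeds.append(sum(vel) / n_cars)
    return avg_speeds

speeds = nagel_schreckenberg()
print(min(speeds), max(speeds))
```

Averaged over cars, the flow never settles at the “equilibrium speed”: average speed fluctuates from step to step as jams form, propagate backward, and clear, and it stays well below the free-flow maximum, since total speed is capped by the total empty road between cars. The jam is exactly Arthur’s meso-level phenomenon: it is visible neither in any one car’s rule nor in the aggregate density, which is constant throughout.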

This simple example provides an excellent insight into why macroeconomic reasoning can be led badly astray by focusing on the purely equilibrium relationships characterizing what we now think of as microfounded models. In arguing against the Keynesian multiplier analysis supposedly justifying increased government spending as a countercyclical tool, Robert Barro wrote the following in an unfortunate Wall Street Journal op-ed piece, which I have previously commented on here and here.

Keynesian economics argues that incentives and other forces in regular economics are overwhelmed, at least in recessions, by effects involving “aggregate demand.” Recipients of food stamps use their transfers to consume more. Compared to this urge, the negative effects on consumption and investment by taxpayers are viewed as weaker in magnitude, particularly when the transfers are deficit-financed.

Thus, the aggregate demand for goods rises, and businesses respond by selling more goods and then by raising production and employment. The additional wage and profit income leads to further expansions of demand and, hence, to more production and employment. As per Mr. Vilsack, the administration believes that the cumulative effect is a multiplier around two.

If valid, this result would be truly miraculous. The recipients of food stamps get, say, $1 billion but they are not the only ones who benefit. Another $1 billion appears that can make the rest of society better off. Unlike the trade-off in regular economics, that extra $1 billion is the ultimate free lunch.

How can it be right? Where was the market failure that allowed the government to improve things just by borrowing money and giving it to people? Keynes, in his “General Theory” (1936), was not so good at explaining why this worked, and subsequent generations of Keynesian economists (including my own youthful efforts) have not been more successful.

In the disequilibrium environment of a recession, it is at least possible that injecting additional spending into the economy could produce effects that a similar injection of spending, under “normal” macro conditions, would not produce, just as somehow withdrawing a few cars from a congested road could increase the average speed of all the remaining cars on the road, by a much greater amount than would withdrawing a few cars from an uncongested road. In other words, microresponses may be sensitive to macroconditions.
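For reference, the arithmetic behind a “multiplier around two” in the textbook Keynesian story that Barro is attacking is just a geometric series in the marginal propensity to consume. The sketch below is a standard illustration, not anything from Barro’s piece; the propensity of 0.5 is an assumption chosen to yield a multiplier of two.

```python
def total_spending(injection=1.0, mpc=0.5, rounds=60):
    """Round-by-round multiplier arithmetic.

    Each round, recipients spend a fraction `mpc` of what they received
    in the previous round; the rounds sum to injection / (1 - mpc)."""
    total, received = 0.0, injection
    for _ in range(rounds):
        total += received
        received *= mpc
    return total

print(total_spending())   # approaches 2.0 = 1 / (1 - 0.5)
```

A marginal propensity to consume of one-half makes the cumulative spending effect twice the initial outlay. Whether that arithmetic corresponds to anything real is, as argued above, a question about macro conditions: in a disequilibrium environment the injection may relieve a coordination failure, while under normal conditions it merely displaces other spending.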

Franklin Fisher on the Stability(?) of General Equilibrium

The eminent Franklin Fisher, winner of the J. B. Clark Medal in 1973, a famed econometrician and antitrust economist, who was the expert economics witness for IBM in its long battle with the U. S. Department of Justice, was later the expert witness for the Justice Department in the antitrust case against Microsoft, and is currently emeritus professor of microeconomics at MIT, visited the FTC today to give a talk about proposals for the efficient sharing of water among Israel, Palestine, and Jordan. The talk was interesting and informative, but I must admit that I was more interested in Fisher’s views on the stability of general equilibrium, the subject of a monograph he wrote for the Econometric Society, Disequilibrium Foundations of Equilibrium Economics, a book which I have not yet read, but hope to read before very long.

However, I did find a short paper by Fisher, “The Stability of General Equilibrium – What Do We Know and Why Is It Important?” (available here), which was included in the volume General Equilibrium Analysis: A Century after Walras, edited by Pascal Bridel.

Fisher’s contribution was to show that the early stability analyses of general equilibrium, despite the efforts of some of the best economists of the mid-twentieth century, e.g., Hicks, Samuelson, Arrow and Hurwicz (all Nobel Prize winners), failed to provide a useful analysis of the question whether the general equilibrium described by Walras, whose existence was first demonstrated under very restrictive assumptions by Abraham Wald, and later under more general conditions by Arrow and Debreu, is stable or not.

Although we routinely apply comparative-statics exercises to derive what Samuelson mislabeled “meaningful theorems,” meaning refutable propositions about the directional effects of a parameter change on some observable economic variable(s), such as the effect of an excise tax on the price and quantity sold of the taxed commodity, those comparative-statics exercises are predicated on the assumption that the exercise starts from an initial position of equilibrium and that the parameter change leads, in a short period of time, to a new equilibrium. But there is no theory describing the laws of motion leading from one equilibrium to another, so the whole exercise is built on the mere assumption that a general equilibrium is sufficiently stable so that the old and the new equilibria can be usefully compared. In other words, microeconomics is predicated on macroeconomic foundations, i.e., the stability of a general equilibrium. The methodological demand for microfoundations for macroeconomics is thus a massive and transparent exercise in question begging.

In his paper on the stability of general equilibrium, Fisher observes that there are four important issues to be explored by general-equilibrium theory: existence, uniqueness, optimality, and stability. Of these he considers optimality to be the most important, as it provides a justification for a capitalistic market economy. Fisher continues:

So elegant and powerful are these results, that most economists base their conclusions upon them and work in an equilibrium framework – as they do in partial equilibrium analysis. But the justification for so doing depends on the answer to the fourth question listed above, that of stability, and a favorable answer to that is by no means assured.

It is important to understand this point which is generally ignored by economists. No matter how desirable points of competitive general equilibrium may be, that is of no consequence if they cannot be reached fairly quickly or maintained thereafter, or, as might happen when a country decides to adopt free markets, there are bad consequences on the way to equilibrium.

Milton Friedman remarked to me long ago that the study of the stability of general equilibrium is unimportant, first, because it is obvious that the economy is stable, and, second, because if it isn’t stable we are wasting our time. He should have known better. In the first place, it is not at all obvious that the actual economy is stable. Apart from the lessons of the past few years, there is the fact that prices do change all the time. Beyond this, however, is a subtler and possibly more important point. Whether or not the actual economy is stable, we largely lack a convincing theory of why that should be so. Lacking such a theory, we do not have an adequate theory of value, and there is an important lacuna in the center of microeconomic theory.

Yet economists generally behave as though this problem did not exist. Perhaps the most extreme example of this is the view of the theory of Rational Expectations that any disequilibrium disappears so fast that it can be ignored. (If the 50-dollar bill were really on the sidewalk, it would be gone already.) But this simply assumes the problem away. The pursuit of profits is a major dynamic force in the competitive economy. To only look at situations where the Invisible Hand has finished its work cannot lead to a real understanding of how that work is accomplished. (p. 35)

I would also note that Fisher confirms a proposition that I have advanced a couple of times previously, namely that Walras’s Law is not generally valid except in a full general equilibrium with either a complete set of markets or correct price expectations. Outside of general equilibrium, Walras’s Law is valid only if trading is not permitted at disequilibrium prices, i.e., Walrasian tatonnement. Here’s how Fisher puts it.

In this context, it is appropriate to remark that Walras’s Law no longer holds in its original form. Instead of the sum of the money value of all excess demands over all agents being zero, it now turned out that, at any moment of time, the same sum (including the demands for shares of firms and for money) equals the difference between the total amount of dividends that households expect to receive at that time and the amount that firms expect to pay. This difference disappears in equilibrium where expectations are correct, and the classic version of Walras’s Law then holds.
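In symbols (my notation, not Fisher’s), the modified law says that out of equilibrium

```latex
\sum_{j} p_j \, z_j(p) \;=\; D^{e}_{H} - D^{e}_{F},
```

where $z_j$ is the aggregate excess demand for good $j$ (including shares of firms and money), $D^{e}_{H}$ is the total dividends households expect to receive, and $D^{e}_{F}$ is the total dividends firms expect to pay. In equilibrium expectations are correct, the right-hand side vanishes, and the classic version of Walras’s Law, $\sum_j p_j z_j(p) = 0$, is restored.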

Explaining the Hegemony of New Classical Economics

Simon Wren-Lewis, Robert Waldmann, and Paul Krugman have all recently devoted additional space to explaining – ruefully, for the most part – how it came about that New Classical Economics took over mainstream macroeconomics just about half a century after the Keynesian Revolution. And Mark Thoma got them all started with a complaint about the sorry state of modern macroeconomics and its failure to prevent or to cure the Little Depression.

Wren-Lewis believes that the main problem with modern macro is too much of a good thing, the good thing being microfoundations. Those microfoundations, in Wren-Lewis’s rendering, filled certain gaps in the ad hoc Keynesian expenditure functions. Although the gaps were not as serious as the New Classical School believed, adding an explicit model of intertemporal expenditure plans derived from optimization conditions and rational expectations was, in Wren-Lewis’s estimation, an improvement on the old Keynesian theory. The improvements could easily have been assimilated into the old Keynesian theory, but weren’t, because the New Classicals wanted to junk, not improve, the received Keynesian theory.

Wren-Lewis believes that it is actually possible for the progeny of Keynes and the progeny of Fisher to coexist harmoniously, and despite his discomfort with the anti-Keynesian bias of modern macroeconomics, he views the current macroeconomic research program as progressive. By progressive, I interpret him to mean that macroeconomics is still generating new theoretical problems to investigate, and that attempts to solve those problems are producing a stream of interesting and useful publications – interesting and useful, that is, to other economists doing macroeconomic research. Whether the problems and their solutions are useful to anyone else is perhaps not quite so clear. But even if interest in modern macroeconomics is largely confined to practitioners of modern macroeconomics, that fact alone would not conclusively show that the research program in which they are engaged is not progressive, the progressiveness of the research program requiring no more than a sufficient number of self-selecting econ grad students, and a willingness of university departments and sources of research funding to cater to the idiosyncratic tastes of modern macroeconomists.

Robert Waldmann, unsurprisingly, takes a rather less charitable view of modern macroeconomics, focusing on its failure to discover any new, previously unknown, empirical facts about macroeconomics, or to explain known facts better than alternative models do, e.g., by more accurately predicting observed macro time-series data. By that admittedly demanding criterion, Waldmann finds nothing progressive in the modern macroeconomics research program.

Paul Krugman weighed in by emphasizing not only the ideological agenda behind the New Classical Revolution, but the self-interest of those involved:

Well, while the explicit message of such manifestos is intellectual – this is the only valid way to do macroeconomics – there’s also an implicit message: from now on, only my students and disciples will get jobs at good schools and publish in major journals. And that, to an important extent, is exactly what happened; Ken Rogoff wrote about the “scars of not being able to publish sticky-price papers during the years of new classical repression.” As time went on and members of the clique made up an ever-growing share of senior faculty and journal editors, the clique’s dominance became self-perpetuating – and impervious to intellectual failure.

I don’t disagree that there has been intellectual repression, and that this has made professional advancement difficult for those who don’t subscribe to the reigning macroeconomic orthodoxy, but I think that the story is more complicated than Krugman suggests. The reason I say that is because I cannot believe that the top-ranking economics departments at schools like MIT, Harvard, UC Berkeley, Princeton, and Penn, and other supposed bastions of saltwater thinking have bought into the underlying New Classical ideology. Nevertheless, microfounded DSGE models have become de rigueur for any serious academic macroeconomic theorizing, not only in the Journal of Political Economy (Chicago), but in the Quarterly Journal of Economics (Harvard), the Review of Economics and Statistics (MIT), and the American Economic Review. New Keynesians, like Simon Wren-Lewis, have made their peace with the new order, and old Keynesians have been relegated to the periphery, unable to publish in the journals that matter without observing the generally accepted (even by those who don’t subscribe to New Classical ideology) conventions of proper macroeconomic discourse.

So I don’t think that Krugman’s ideology plus self-interest story fully explains how the New Classical hegemony was achieved. What I think is missing from his story is the spurious methodological requirement of microfoundations foisted on macroeconomists in the course of the 1970s. I have discussed microfoundations in a number of earlier posts (here, here, here, here, and here) so I will try, possibly in vain, not to repeat myself too much.

The importance and desirability of microfoundations were never questioned. What, after all, was the neoclassical synthesis, if not an attempt, partly successful and partly unsuccessful, to integrate monetary theory with value theory, or macroeconomics with microeconomics? But in the early 1970s the focus of attempts, notably in the 1970 Phelps volume, to provide microfoundations changed from embedding the Keynesian system in a general-equilibrium framework, as Patinkin had done, to providing an explicit microeconomic rationale for the Keynesian idea that the labor market could not be cleared via wage adjustments.

In chapter 19 of the General Theory, Keynes struggled to come up with a convincing general explanation for the failure of nominal-wage reductions to clear the labor market. Instead, he offered an assortment of seemingly ad hoc arguments about why nominal-wage adjustments would not succeed in reducing unemployment, enabling all workers willing to work at the prevailing wage to find employment at that wage. This forced Keynesians into the awkward position of relying on an argument — wages tend to be sticky, especially in the downward direction — that was not really different from one used by the “Classical Economists” excoriated by Keynes to explain high unemployment: that rigidities in the price system – often politically imposed rigidities – prevented wage and price adjustments from equilibrating demand with supply in the textbook fashion.

These early attempts at providing microfoundations were largely exercises in applied price theory, explaining why self-interested behavior by rational workers and employers lacking perfect information about all potential jobs and all potential workers would not result in immediate price adjustments that would enable all workers to find employment at a uniform market-clearing wage. Although these largely search-theoretic models led to a more sophisticated and nuanced understanding of labor-market dynamics than economists had previously had, the models ultimately did not provide a fully satisfactory account of cyclical unemployment. But the goal of microfoundations was to explain a certain set of phenomena in the labor market that had not been seriously investigated, in the hope that price and wage stickiness could be analyzed as an economic phenomenon rather than being arbitrarily introduced into models as an ad hoc, albeit seemingly plausible, assumption.

But instead of pursuing microfoundations as an explanatory strategy, the New Classicals chose to impose it as a methodological prerequisite. A macroeconomic model was inadmissible unless it could be explicitly and formally derived from the optimizing choices of fully rational agents. Instead of trying to enrich and potentially transform the Keynesian model with a deeper analysis and understanding of the incentives and constraints under which workers and employers make decisions, the New Classicals used microfoundations as a methodological tool by which to delegitimize Keynesian models, those models being insufficiently or improperly microfounded. Instead of using microfoundations as a method by which to make macroeconomic models conform more closely to the imperfect and limited informational resources available to actual employers deciding to hire or fire employees, and actual workers deciding to accept or reject employment opportunities, the New Classicals chose to use microfoundations as a methodological justification for the extreme unrealism of the rational-expectations assumption, portraying it as nothing more than the consistent application of the rationality postulate underlying standard neoclassical price theory.

For the New Classicals, microfoundations became a reductionist crusade. There is only one kind of economics, and it is not macroeconomics. Even the idea that there could be a conceptual distinction between micro and macroeconomics was unacceptable to Robert Lucas, just as the idea that there is, or could be, a mind not reducible to the brain is unacceptable to some deranged neuroscientists. No science, not even chemistry, has been reduced to physics. Were it ever to be accomplished, the reduction of chemistry to physics would be a great scientific achievement. Some parts of chemistry have been reduced to physics, which is a good thing, especially when doing so actually enhances our understanding of the chemical process and results in an improved, or more exact, restatement of the relevant chemical laws. But it would be absurd and preposterous simply to reject, on supposed methodological principle, those parts of chemistry that have not been reduced to physics. And how much more absurd would it be to reject higher-level sciences, like biology and ecology, for no other reason than that they have not been reduced to physics.

But reductionism is what modern macroeconomics, under the New Classical hegemony, insists on. No exceptions allowed; don’t even ask. Meekly and unreflectively, modern macroeconomics has succumbed to the absurd and arrogant methodological authoritarianism of the New Classical Revolution. What an embarrassment.

UPDATE (11:43 AM EDST): I made some minor editorial revisions to eliminate some grammatical errors and misplaced or superfluous words.

Methodological Arrogance

A few weeks ago, I posted a somewhat critical review of Kartik Athreya’s new book Big Ideas in Macroeconomics. In quoting a passage from chapter 4 in which Kartik defended the rational-expectations axiom on the grounds that it protects the public from economists who, if left unconstrained by the discipline of rational expectations, could use expectational assumptions to generate whatever results they wanted, I suggested that this sort of reasoning in defense of the rational-expectations axiom betrayed what I called the “methodological arrogance” of modern macroeconomics which has, to a large extent, succeeded in imposing that axiom on all macroeconomic models. In his comment responding to my criticisms, Kartik made good-natured reference in passing to my charge of “methodological arrogance,” without substantively engaging with the charge. And in a post about the early reviews of Kartik’s book, Steve Williamson, while crediting me for at least reading the book before commenting on it, registered puzzlement at what I meant by “methodological arrogance.”

Actually, I realized when writing that post that I was not being entirely clear about what “methodological arrogance” meant, but I thought that my somewhat tongue-in-cheek reference to the duty of modern macroeconomists “to ban such models from polite discourse — certainly from the leading economics journals — lest the public be tainted by economists who might otherwise dare to abuse their models by making illicit assumptions about expectations formation and equilibrium concepts” was sufficiently suggestive not to require elaboration, especially after having devoted several earlier posts to criticisms of the methodology of modern macroeconomics (e.g., here, here, and here). That was a misjudgment.

So let me try to explain what I mean by methodological arrogance, which is not quite the same as, but is closely related to, methodological authoritarianism. I will do so by referring to the long introductory essay (“A Realist View of Logic, Physics, and History”) that Karl Popper contributed to the book The Self and Its Brain, co-authored with the neuroscientist John Eccles. The chief aim of the essay was to argue that the universe is not fully determined, but evolves, producing new, emergent phenomena not originally extant in the universe, such as the higher elements, life, consciousness, language, science and all other products of human creativity, which in turn interact with the universe in fundamentally unpredictable ways. Popper regards consciousness as a real phenomenon that cannot be reduced to or explained by purely physical causes. Though he makes only brief passing reference to the social sciences, Popper’s criticisms of reductionism are directly applicable to the microfoundations program of modern macroeconomics, and so I think it will be useful to quote what he wrote at some length.

Against the acceptance of the view of emergent evolution there is a strong intuitive prejudice. It is the intuition that, if the universe consists of atoms or elementary particles, so that all things are structures of such particles, then every event in the universe ought to be explicable, and in principle predictable, in terms of particle structure and of particle interaction.

Notice how easy it would be to rephrase this statement as a statement about microfoundations:

Against the acceptance of the view that there are macroeconomic phenomena, there is a strong intuitive prejudice. It is the intuition that, if the macroeconomy consists of independent agents, so that all macroeconomic phenomena are the result of decisions made by independent agents, then every macroeconomic event ought to be explicable, and in principle predictable, in terms of the decisions of individual agents and their interactions.

Popper continues:

Thus we are led to what has been called the programme of reductionism [microfoundations]. In order to discuss it I shall make use of the following Table

(12) Level of ecosystems

(11) Level of populations of metazoa and plants

(10) Level of metazoa and multicellular plants

(9) Level of tissues and organs (and of sponges?)

(8) Level of populations of unicellular organisms

(7) Level of cells and of unicellular organisms

(6) Level of organelles (and perhaps of viruses)

(5) Liquids and solids (crystals)

(4) Molecules

(3) Atoms

(2) Elementary particles

(1) Sub-elementary particles

(0) Unknown sub-sub-elementary particles?

The reductionist idea behind this table is that the events or things on each level should be explained in terms of the lower levels. . . .

This reductionist idea is interesting and important; and whenever we can explain entities and events on a higher level by those of a lower level, we can speak of a great scientific success, and can say that we have added much to our understanding of the higher level. As a research programme, reductionism is not only important, but it is part of the programme of science whose aim is to explain and to understand.

So far so good. Reductionism certainly has its place. So do microfoundations. Whenever we can take an observation and explain it in terms of its constituent elements, we have accomplished something important. We have made scientific progress.

But Popper goes on to voice a cautionary note. There may be, and probably are, strict, perhaps insuperable, limits to how far higher-level phenomena can be reduced to (explained by) lower-level phenomena.

[E]ven the often referred to reduction of chemistry to physics, important as it is, is far from complete, and very possibly incompletable. . . . [W]e are far removed indeed from being able to claim that all, or most, properties of chemical compounds can be reduced to atomic theory. . . . In fact, the five lower levels of [our] Table . . . can be used to show that we have reason to regard this kind of intuitive reduction programme as clashing with some results of modern physics.

For what [our] Table suggests may be characterized as the principle of “upward causation.” This is the principle that causation can be traced in our Table . . . . from a lower level to a higher level, but not vice versa; that what happens on a higher level can be explained in terms of the next lower level, and ultimately in terms of elementary particles and the relevant physical laws. It appears at first that the higher levels cannot act on the lower ones.

But the idea of particle-to-particle or atom-to-atom interaction has been superseded by physics itself. A diffraction grating or a crystal (belonging to level (5) of our Table . . .) is a spatially very extended complex (and periodic) structure of billions of molecules; but it interacts as a whole extended periodic structure with the photons or the particles of a beam of photons or particles. Thus we have here an important example of “downward causation“. . . . That is to say, the whole, the macro structure, may, qua whole, act upon a photon or an elementary particle or an atom. . . .

Other physical examples of downward causation – of macroscopic structures on level (5) acting upon elementary particles or photons on level (1) – are lasers, masers, and holograms. And there are also many other macro structures which are examples of downward causation: every simple arrangement of negative feedback, such as a steam engine governor, is a macroscopic structure that regulates lower level events, such as the flow of the molecules that constitute the steam. Downward causation is of course important in all tools and machines which are designed for some purpose. When we use a wedge, for example, we do not arrange for the action of its elementary particles, but we use a structure, relying on it to guide the actions of its constituent elementary particles to act, in concert, so as to achieve the desired result.

Stars are undesigned, but one may look at them as undesigned “machines” for putting the atoms and elementary particles in their central region under terrific gravitational pressure, with the (undesigned) result that some atomic nuclei fuse and form the nuclei of heavier elements; an excellent example of downward causation, of the action of the whole structure upon its constituent particles.

(Stars, incidentally, are good examples of the general rule that things are processes. Also, they illustrate the mistake of distinguishing between “wholes” – which are “more than the sums of their parts” – and “mere heaps”: a star is, in a sense, a “mere” accumulation, a “mere heap” of its constituent atoms. Yet it is a process – a dynamic structure. Its stability depends upon the dynamic equilibrium between its gravitational pressure, due to its sheer bulk, and the repulsive forces between its closely packed elementary particles. If the latter are excessive, the star explodes. If they are smaller than the gravitational pressure, it collapses into a “black hole.”)

The most interesting examples of downward causation are to be found in organisms and in their ecological systems, and in societies of organisms [my emphasis]. A society may continue to function even though many of its members die; but a strike in an essential industry, such as the supply of electricity, may cause great suffering to many individual people. . . . I believe that these examples make the existence of downward causation obvious; and they make the complete success of any reductionist programme at least problematic.

I was very glad when I recently found this discussion of reductionism by Popper in a book that I had not opened for maybe 40 years, because it supports an argument that I have been making on this blog against the microfoundations program in macroeconomics: that as much as macroeconomics requires microfoundations, microeconomics also requires macrofoundations. Here is how I put it a little over a year ago:

In fact, the standard comparative-statics propositions of microeconomics are also based on the assumption of the existence of a unique stable general equilibrium. Those comparative-statics propositions about the signs of the derivatives of various endogenous variables (price, quantity demanded, quantity supplied, etc.) with respect to various parameters of a microeconomic model involve comparisons between equilibrium values of the relevant variables before and after the posited parametric changes. All such comparative-statics results involve a ceteris-paribus assumption, conditional on the existence of a unique stable general equilibrium which serves as the starting and ending point (after adjustment to the parameter change) of the exercise, thereby isolating the purely hypothetical effect of a parameter change. Thus, as much as macroeconomics may require microfoundations, microeconomics is no less in need of macrofoundations, i.e., the existence of a unique stable general equilibrium, absent which a comparative-statics exercise would be meaningless, because the ceteris-paribus assumption could not otherwise be maintained. To assert that macroeconomics is impossible without microfoundations is therefore to reason in a circle, the empirically relevant propositions of microeconomics being predicated on the existence of a unique stable general equilibrium. But it is precisely the putative failure of a unique stable intertemporal general equilibrium to be attained, or to serve as a powerful attractor to economic variables, that provides the rationale for the existence of a field called macroeconomics.
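The dependence of comparative statics on a unique equilibrium can be made concrete with a minimal sketch (a hypothetical linear supply-and-demand model; the functional forms and parameter names are illustrative assumptions, not anything from the post): the derivative of the equilibrium price with respect to a demand parameter is well defined only because a unique equilibrium exists before and after the parametric change.

```python
# Comparative statics in a hypothetical linear market: demand q = a - b*p,
# supply q = c + d*p. The exercise compares the unique equilibrium price
# before and after a shift in the demand intercept a.
import sympy as sp

a, b, c, d, p = sp.symbols('a b c d p', positive=True)

excess_demand = (a - b*p) - (c + d*p)
p_star = sp.solve(sp.Eq(excess_demand, 0), p)[0]  # unique equilibrium: (a - c)/(b + d)

# The comparative-statics derivative dp*/da exists and has a determinate
# sign only because p_star exists and is unique; here it is positive.
dp_da = sp.diff(p_star, a)  # 1/(b + d) > 0

print(p_star)
print(dp_da)
```

If the excess-demand equation had no root, or several, `p_star` and hence `dp_da` would be meaningless — which is the macrofoundational premise the passage above insists on.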

And more recently, I put it this way:

The microeconomic theory of price adjustment is a theory of price adjustment in a single market. It is a theory in which, implicitly, all prices and quantities but a single price-quantity pair are in equilibrium. Equilibrium in that single market is rapidly restored by price and quantity adjustment in that single market. That is why I have said that microeconomics rests on a macroeconomic foundation, and that is why it is illusory to imagine that macroeconomics can be logically derived from microfoundations. Microfoundations, insofar as they explain how prices adjust, are themselves founded on the existence of a macroeconomic equilibrium. Founding macroeconomics on microfoundations is just a form of bootstrapping.

So I think that my criticism of the microfoundations project exactly captures the gist of Popper’s criticism of reductionism. Popper extended his criticism of a certain form of reductionism, which he called “radical materialism or radical physicalism,” in a later passage in the same essay that is also worth quoting:

Radical materialism or radical physicalism is certainly a self-consistent position. For it is a view of the universe which, as far as we know, was adequate once; that is, before the emergence of life and consciousness. . . .

What speaks in favour of radical materialism or radical physicalism is, of course, that it offers us a simple vision of a simple universe, and this looks attractive just because, in science, we search for simple theories. However, I think that it is important that we note that there are two different ways by which we can search for simplicity. They may be called, briefly, philosophical reduction and scientific reduction. The former is characterized by an attempt to provide bold and testable theories of high explanatory power. I believe that the latter is an extremely valuable and worthwhile method; while the former is of value only if we have good reasons to assume that it corresponds to the facts about the universe.

Indeed, the demand for simplicity in the sense of philosophical rather than scientific reduction may actually be damaging. For even in order to attempt scientific reduction, it is necessary for us to get a full grasp of the problem to be solved, and it is therefore vitally important that interesting problems are not “explained away” by philosophical analysis. If, say, more than one factor is responsible for some effect, it is important that we do not pre-empt the scientific judgment: there is always the danger that we might refuse to admit any ideas other than the ones we appear to have at hand: explaining away, or belittling the problem. The danger is increased if we try to settle the matter in advance by philosophical reduction. Philosophical reduction also makes us blind to the significance of scientific reduction.

Popper adds the following footnote about the difference between philosophical and scientific reduction.

Consider, for example, what a dogmatic philosophical reductionist of a mechanistic disposition (or even a quantum-mechanistic disposition) might have done in the face of the problem of the chemical bond. The actual reduction, so far as it goes, of the theory of the hydrogen bond to quantum mechanics is far more interesting than the philosophical assertion that such a reduction will one day be achieved.

What modern macroeconomics now offers is largely an array of models simplified sufficiently so that they are solvable using the techniques of dynamic optimization. Dynamic optimization by individual agents — the microfoundations of modern macro — makes sense only in the context of an intertemporal equilibrium. But it is just the possibility that intertemporal equilibrium may not obtain that, to some of us at least, makes macroeconomics interesting and relevant. As the great Cambridge economist, Frederick Lavington, anticipating Popper in grasping the possibility of downward causation, put it so well, “the inactivity of all is the cause of the inactivity of each.”

So what do I mean by methodological arrogance? I mean an attitude that invokes microfoundations as a methodological principle — philosophical reductionism in Popper’s terminology — while dismissing non-microfounded macromodels as unscientific. To be sure, the progress of science may enable us to reformulate (and perhaps improve) explanations of certain higher-level phenomena by expressing those relationships in terms of lower-level concepts. That is what Popper calls scientific reduction. But scientific reduction is very different from rejecting, on methodological principle, any explanation not expressed in terms of more basic concepts.

And whenever macrotheory seems inconsistent with microtheory, the inconsistency poses a problem to be solved. Solving the problem will advance our understanding. But simply to reject the macrotheory on methodological principle without evidence that the microfounded theory gives a better explanation of the observed phenomena than the non-microfounded macrotheory (and especially when the evidence strongly indicates the opposite) is arrogant. Microfoundations for macroeconomics should result from progress in economic theory, not from a dubious methodological precept.

Let me quote Popper again (this time from his book Objective Knowledge) about the difference between scientific and philosophical reduction, addressing the denial by physicalists that there is such a thing as consciousness, a denial based on their belief that all supposedly mental phenomena can and will ultimately be reduced to purely physical phenomena:

[P]hilosophical speculations of a materialistic or physicalistic character are very interesting, and may even be able to point the way to a successful scientific reduction. But they should be frankly tentative theories. . . . Some physicalists do not, however, consider their theories as tentative, but as proposals to express everything in physicalist language; and they think these proposals have much in their favour because they are undoubtedly convenient: inconvenient problems such as the body-mind problem do indeed, most conveniently, disappear. So these physicalists think that there can be no doubt that these problems should be eliminated as pseudo-problems. (p. 293)

One could easily substitute “methodological speculations about macroeconomics” for “philosophical speculations of a materialistic or physicalistic character” in the first sentence. And in the third sentence one could substitute “advocates of microfounding all macroeconomic theories” for “physicalists,” “microeconomic” for “physicalist,” and “Phillips Curve” or “involuntary unemployment” for “body-mind problem.”

So, yes, I think it is arrogant to think that you can settle an argument by forcing the other side to use only those terms that you approve of.

What Does “Keynesian” Mean?

Last week Simon Wren-Lewis wrote a really interesting post on his blog trying to find the right labels with which to identify macroeconomists. Simon, rather disarmingly, starts by admitting the ultimate futility of assigning people labels; reality is just too complicated to conform to the labels that we invent to help ourselves make sense of reality. A good label can provide us with a handle with which to gain a better grasp on a messy set of observations, but it is not the reality. And if you come up with one label, I may counter with a different one. Who’s to say which label is better?

At any rate, as I read through Simon’s post I found myself alternately nodding my head in agreement and shaking my head in disagreement. So staying in the spirit of fun in which Simon wrote his post, I will provide a commentary on his labels and other pronouncements. If the comments are weighted on the side of disagreement, well, that’s what makes blogging fun, n’est-ce pas?

Simon divides academic researchers into two groups (mainstream and heterodox) and macroeconomic policy into two approaches (Keynesian and anti-Keynesian). He then offers the following comment on the meaning of the label Keynesian.

Just think about the label Keynesian. Any sensible definition would involve the words sticky prices and aggregate demand. Yet there are still some economists (generally not academics) who think Keynesian means believing fiscal rather than monetary policy should be used to stabilise demand. Fifty years ago maybe, but no longer. Even worse are non-economists who think being a Keynesian means believing in market imperfections, government intervention in general and a mixed economy. (If you do not believe this happens, look at the definition in Wikipedia.)

Well, as I pointed out in a recent post, there is nothing peculiarly Keynesian about the assumption of sticky prices, especially not as a necessary condition for an output gap and involuntary unemployment. So Simon is going to have to work harder to justify his distinction between Keynesian and anti-Keynesian. In a comment on Simon’s blog, Nick Rowe pointed out just this problem, asking in particular why Simon could not substitute a Monetarist/anti-Monetarist dichotomy for the Keynesian/anti-Keynesian one.

The story gets more complicated in Simon’s next paragraph in which he describes his dichotomy of academic research into mainstream and heterodox.

Thanks to the microfoundations revolution in macro, mainstream macroeconomists speak the same language. I can go to a seminar that involves an RBC model with flexible prices and no involuntary unemployment and still contribute and possibly learn something. Equally an economist like John Cochrane can and does engage in meaningful discussions of New Keynesian theory (pdf).

In other words, the range of acceptable macroeconomic models has been drastically narrowed. Unless it is microfounded in a dynamic stochastic general equilibrium model, a model does not qualify as “mainstream.” This notion of microfoundation is certainly not what Edmund Phelps meant by “microeconomic foundations” when he edited his famous volume Microeconomic Foundations of Employment and Inflation Theory, which contained, among others, Alchian’s classic paper on search costs and unemployment and a paper by the then not so well-known Robert Lucas and his early collaborator Leonard Rapping. Nevertheless, in the current consensus, it is apparently the New Classicals that determine what kind of model is acceptable, while New Keynesians are allowed to make whatever adjustments, mainly sticky wages, they need to derive Keynesian policy recommendations. Anyone who doesn’t go along with this bargain is excluded from the mainstream. Simon may not be happy with this state of affairs, but he seems to have made peace with it without undue discomfort.

Now many mainstream macroeconomists, myself included, can be pretty critical of the limitations that this programme can place on economic thinking, particularly if it is taken too literally by microfoundations purists. But like it or not, that is how most macro research is done nowadays in the mainstream, and I see no sign of this changing anytime soon. (Paul Krugman discusses some reasons why here.) My own view is that I would like to see more tolerance and a greater variety of modelling approaches, but a pragmatic microfoundations macro will and should remain the major academic research paradigm.

Thus, within the mainstream, there is no basic difference in how to create a macroeconomic model. The difference is just in how to tweak the model in order to derive the desired policy implication.

When it comes to macroeconomic policy, and keeping to the different language idea, the only significant division I see is between the mainstream macro practiced by most economists, including those in most central banks, and anti-Keynesians. By anti-Keynesian I mean those who deny the potential for aggregate demand to influence output and unemployment in the short term.

So, even though New Keynesians have learned how to speak the language of New Classicals, New Keynesians can console themselves in retaining the upper hand in policy discussions. Which is why in policy terms, Simon chooses a label that is at least suggestive of a certain Keynesian primacy, the other side being defined in terms of their opposition to Keynesian policy. Half apologetically, Simon then asks: “Why do I use the term anti-Keynesian rather than, say, New Classical?” After all, it’s the New Classical model that’s being tweaked. Simon responds:

Partly because New Keynesian economics essentially just augments New Classical macroeconomics with sticky prices. But also because as far as I can see what holds anti-Keynesians together isn’t some coherent and realistic view of the world, but instead a dislike of what taking aggregate demand seriously implies.

This explanation really annoyed Steve Williamson who commented on Simon’s blog as follows:

Part of what defines a Keynesian (new or old), is that a Keynesian thinks that his or her views are “mainstream,” and that the rest of macroeconomic thought is defined relative to what Keynesians think – Keynesians reside at the center of the universe, and everything else revolves around them.

Simon goes on to explain what he means by the incoherence of the anti-Keynesian view of the world, pointing out that the Pigou Effect, which supposedly invalidated Keynes’s argument that perfect wage and price flexibility would not eventually restore full employment to an economy operating at less than full employment, has itself been shown not to be valid. And then Simon invokes that old standby Say’s Law.

Second, the evidence that prices are not flexible is so overwhelming that you need something else to drive you to ignore this evidence. Or to put it another way, you need something pretty strong for politicians or economists to make the ‘schoolboy error’ that is Say’s Law, which is why I think the basis of the anti-Keynesian view is essentially ideological.

Here, I think, Simon is missing something important. It was a mistake on Keynes’s part to focus on Say’s Law as the epitome of everything wrong with “classical economics.” Actually Say’s Law is a description of what happens in an economy when trading takes place at disequilibrium prices. At disequilibrium prices, potential gains from trade are left on the table. Not only are they left on the table, but the effects can be cumulative, because the failure to supply implies a further failure to demand. The Keynesian spending multiplier is the other side of the coin of the supply-side contraction envisioned by Say. Even infinite wage and price flexibility may not help an economy in which a lot of trade is occurring at disequilibrium prices.
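The cumulative character of disequilibrium trading can be illustrated with standard multiplier arithmetic (the numbers — an initial sales shortfall of 100 and a marginal propensity to spend of 0.8 — are purely illustrative assumptions): each failure to sell, and hence to earn, reduces the frustrated sellers' own spending, which becomes someone else's failure to sell, and so on.

```python
# Illustrative multiplier arithmetic: an initial failure to sell (and hence
# to earn) of 100 reduces the unlucky sellers' own demand by mpc * 100,
# which reduces someone else's sales, and so on, round after round.
def cumulative_contraction(initial_shortfall, mpc, rounds=1000):
    total, shortfall = 0.0, initial_shortfall
    for _ in range(rounds):
        total += shortfall       # income lost in this round
        shortfall *= mpc         # spending cut passed on to the next round
    return total

# The rounds sum to a geometric series: 100 / (1 - 0.8) = 500.
print(cumulative_contraction(100.0, 0.8))
```

The point of the passage above is that this cascade operates through quantities traded at disequilibrium prices, so even perfectly flexible wages and prices need not arrest it.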

The microeconomic theory of price adjustment is a theory of price adjustment in a single market. It is a theory in which, implicitly, all prices and quantities but a single price-quantity pair are in equilibrium. Equilibrium in that single market is rapidly restored by price and quantity adjustment in that single market. That is why I have said that microeconomics rests on a macroeconomic foundation, and that is why it is illusory to imagine that macroeconomics can be logically derived from microfoundations. Microfoundations, insofar as they explain how prices adjust, are themselves founded on the existence of a macroeconomic equilibrium. Founding macroeconomics on microfoundations is just a form of bootstrapping.

If there is widespread unemployment, it may indeed be that wages are too high, and that a reduction in wages would restore equilibrium. But there is no general presumption that unemployment will be cured by a reduction in wages. Unemployment may be the result of a more general dysfunction in which all prices are away from their equilibrium levels, in which case no adjustment of the wage would solve the problem, so that there is no presumption that the current wage exceeds the full-equilibrium wage. This, by the way, seems to me to be nothing more than a straightforward implication of the Lipsey-Lancaster theory of second best.

Macroeconomic Science and Meaningful Theorems

Greg Hill has a terrific post on his blog, providing the coup de grace to Stephen Williamson’s attempt to show that the way to increase inflation is for the Fed to raise its Federal Funds rate target. Williamson’s problem, Hill points out, is that he attempts to derive his results from relationships that exist in equilibrium. But equilibrium relationships in and of themselves are sterile. What we care about is how a system responds to some change that disturbs a pre-existing equilibrium.

Williamson acknowledged that “the stories about convergence to competitive equilibrium – the Walrasian auctioneer, learning – are indeed just stories . . . [they] come from outside the model” (here).  And, finally, this: “Telling stories outside of the model we have written down opens up the possibility for cheating. If everything is up front – written down in terms of explicit mathematics – then we have to be honest. We’re not doing critical theory here – we’re doing economics, and we want to be treated seriously by other scientists.”

This self-conscious scientism on Williamson’s part is not just annoyingly self-congratulatory. “Hey, look at me! I can write down mathematical models, so I’m a scientist, just like Richard Feynman.” It’s wildly inaccurate, because the mere statement of equilibrium conditions is theoretically vacuous. Back to Greg:

The most disconcerting thing about Professor Williamson’s justification of “scientific economics” isn’t its uncritical “scientism,” nor is it his defense of mathematical modeling. On the contrary, the most troubling thing is Williamson’s acknowledgement-cum-proclamation that his models, like many others, assume that markets are always in equilibrium.

Why is this assumption a problem?  Because, as Arrow, Debreu, and others demonstrated a half-century ago, the conditions required for general equilibrium are unimaginably stringent.  And no one who’s not already ensconced within Williamson’s camp is likely to characterize real-world economies as always being in equilibrium or quickly converging upon it.  Thus, when Williamson responds to a question about this point with, “Much of economics is competitive equilibrium, so if this is a problem for me, it’s a problem for most of the profession,” I’m inclined to reply, “Yes, Professor, that’s precisely the point!”

Greg proceeds to explain that the Walrasian general equilibrium model involves the critical assumption (implemented by the convenient fiction of an auctioneer who announces prices and computes supply and demand at those prices before allowing trade to take place) that no trading takes place except at the equilibrium price vector (where the number of elements in the vector equals the number of goods in the economy). Without an auctioneer there is no way to ensure that the equilibrium price vector, even if it exists, will ever be found.
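The auctioneer's procedure can be sketched as a simple tâtonnement loop (a minimal sketch with a made-up one-good excess-demand function; the function, step size, and equilibrium price are all illustrative assumptions, not anything from Greg's post): prices are announced, excess demand is computed, and the price is revised in proportion to excess demand, with no trade permitted until the process stops.

```python
# Tatonnement sketch: the "auctioneer" adjusts the price of good 1
# (good 2 serving as numeraire) in proportion to excess demand, and no
# trade takes place until excess demand is (approximately) zero.
def excess_demand(p):
    # Hypothetical excess-demand function for good 1, downward-sloping
    # in its own price, with equilibrium at p = 2.0.
    return 4.0 - 2.0 * p

def tatonnement(p0, step=0.1, tol=1e-8, max_iters=10_000):
    p = p0
    for _ in range(max_iters):
        z = excess_demand(p)
        if abs(z) < tol:
            return p          # equilibrium found: trade may now occur
        p += step * z         # raise price under excess demand, cut it under excess supply
    raise RuntimeError("tatonnement failed to converge")

print(tatonnement(1.0))  # approximately 2.0
```

Convergence here depends entirely on the adjustment rule being a contraction for this particular excess-demand function; a different function or step size can cycle or diverge, which is exactly the fragility that Franklin Fisher's stability results expose.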

Franklin Fisher has shown that decisions made out of equilibrium will only converge to equilibrium under highly restrictive conditions (in particular, “no favorable surprises,” i.e., all “sudden changes in expectations are disappointing”).  And since Fisher has, in fact, written down “the explicit mathematics” leading to this conclusion, mustn’t we conclude that the economists who assume that markets are always in equilibrium are really the ones who are “cheating”?

An alternative general equilibrium story is that learning allows the economy to converge over time on a general equilibrium time path, but Greg easily disposes of that story as well.

[T]he learning narrative also harbors massive problems, which come out clearly when viewed against the background of the Arrow-Debreu idealized general equilibrium construction, which includes a complete set of intertemporal markets in contingent claims.  In the world of Arrow-Debreu, every price in every possible state of nature is known at the moment when everyone’s once-and-for-all commitments are made.  Nature then unfolds – her succession of states is revealed – and resources are exchanged in accordance with the (contractual) commitments undertaken “at the beginning.”

In real-world economies, these intertemporal markets are woefully incomplete, so there’s trading at every date, and a “sequence economy” takes the place of Arrow and Debreu’s timeless general equilibrium.  In a sequence economy, buyers and sellers must act on their expectations of future events and the prices that will prevail in light of these outcomes.  In the limiting case of rational expectations, all agents correctly forecast the equilibrium prices associated with every possible state of nature, and no one’s expectations are disappointed. 

Unfortunately, the notion that rational expectations about future prices can replace the complete menu of Arrow-Debreu prices is hard to swallow.  Frank Hahn, who co-authored “General Competitive Analysis” with Kenneth Arrow (1972), could not begin to swallow it, and, in his disgorgement, proceeded to describe in excruciating detail why the assumption of rational expectations isn’t up to the job (here).  And incomplete markets are, of course, but one departure from Arrow-Debreu.  In fact, there are so many more that Hahn came to ridicule the approach of sweeping them all aside, and “simply supposing the economy to be in equilibrium at every moment of time.”

Just to pile on, I would also point out that any general equilibrium model assumes that there is a given state of knowledge that is available to all traders collectively, but not necessarily to each trader. In this context, learning means that traders gradually learn what the pre-existing facts are. But in the real world, knowledge increases and evolves through time. As knowledge changes, capital — both human and physical — embodying that knowledge becomes obsolete and has to be replaced or upgraded, at unpredictable moments of time, because it is the nature of new knowledge that it cannot be predicted. The concept of learning incorporated in these sorts of general equilibrium constructs is a travesty of the kind of learning that characterizes the growth of knowledge in the real world. The implications for the existence of a general equilibrium model in a world in which knowledge grows in an unpredictable way are devastating.

Greg aptly sums up the absurdity of using general equilibrium theory (the description of a decentralized economy in which the component parts are in a state of perfect coordination) as the microfoundation for macroeconomics (the study of decentralized economies that are less than perfectly coordinated) as follows:

What’s the use of “general competitive equilibrium” if it can’t furnish a sturdy, albeit “external,” foundation for the kind of modeling done by Professor Williamson, et al?  Well, there are lots of other uses, but in the context of this discussion, perhaps the most important insight to be gleaned is this: Every aspect of a real economy that Keynes thought important is missing from Arrow and Debreu’s marvelous construction.  Perhaps this is why Axel Leijonhufvud, in reviewing a state-of-the-art New Keynesian DSGE model here, wrote, “It makes me feel transported into a Wonderland of long ago – to a time before macroeconomics was invented.”

To which I would just add that nearly 70 years ago, Paul Samuelson published his magnificent Foundations of Economic Analysis, a work undoubtedly read and mastered by Williamson. But the central contribution of the Foundations was the distinction between equilibrium conditions and what Samuelson (owing to the influence of the then still fashionable philosophical school called logical positivism) mislabeled “meaningful theorems.” A mere equilibrium condition is not the same as a meaningful theorem, but Samuelson showed how a meaningful theorem can be mathematically derived from an equilibrium condition. That link between equilibrium conditions and meaningful theorems was the foundation of economic analysis. Without a mathematical connection between equilibrium conditions and meaningful theorems analogous to the one provided by Samuelson in the Foundations, claims to have provided microfoundations for macroeconomics are, at best, premature.
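Samuelson's procedure can be sketched in a couple of lines of generic comparative statics (an illustration of the method, not a formula taken from the Foundations itself):

```latex
% Equilibrium condition: excess demand vanishes at the equilibrium price,
% where \alpha is some shift parameter (a tax, a taste change, etc.)
f(p^{*}, \alpha) = 0
% Differentiating with respect to \alpha gives the comparative-statics result:
\frac{dp^{*}}{d\alpha} = -\,\frac{\partial f/\partial \alpha}{\partial f/\partial p}
```

The equilibrium condition by itself pins down neither sign. What Samuelson added, via his correspondence principle, was that the stability of the adjustment process implies a sign restriction on \(\partial f/\partial p\), so that \(dp^{*}/d\alpha\) acquires a definite, refutable sign: that is what turns a vacuous equilibrium condition into a "meaningful theorem."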

The Microfoundations Wars Continue

I see belatedly that the battle over microfoundations continues on the blogosphere, with Paul Krugman, Noah Smith, Adam Posen, and Nick Rowe all challenging the microfoundations position, while Tony Yates and Stephen Williamson defend it with Simon Wren-Lewis trying to serve as a peacemaker of sorts. I agree with most of the criticisms, but what I found most striking was the defense of microfoundations offered by Tony Yates, who expresses the mentality of the microfoundations school so well that I thought that some further commentary on his post would be worthwhile.

Yates’s post was prompted by a Twitter exchange between Yates and Adam Posen after Posen tweeted that microfoundations have no merit, an exaggeration no doubt, but not an unreasonable one. Noah Smith chimed in with a challenge to Yates to defend the proposition that microfoundations do have merit. Hence the title (“Why Microfoundations Have Merit”) of Yates’s post. What really caught my attention is that, in trying to defend the proposition that microfounded models do have merit, Yates offers the following methodological, or perhaps aesthetic, pronouncement.

The merit in any economic thinking or knowledge must lie in it at some point producing an insight, a prediction, a prediction of the consequence of a policy action, that helps someone, or a government, or a society to make their lives better.

Microfounded models are models which tell an explicit story about what the people, firms, and large agents in a model do, and why.  What do they want to achieve, what constraints do they face in going about it?  My own position is that these are the ONLY models that have anything genuinely economic to say about anything.  It’s contestable whether they have any merit or not.

Paraphrasing, I would say that Yates defines merit as a useful insight into, or prediction about, the way the world works. Fair enough. He then defines microfounded models as those models that tell an explicit story about what the agents populating the model are trying to do and the resulting outcomes of their efforts. This strikes me as a definition that includes more than just microfounded models, but let that pass, at least for the moment. Then comes the key point. These models “are the ONLY models that have anything genuinely economic to say about anything.” A breathtaking claim.

In other words, Yates believes that unless an insight, a proposition, or a conjecture, can be logically deduced from microfoundations, it is not economics. So whatever the merits of microfounded models, a non-microfounded model is not, as a matter of principle, an economic model. Talk about methodological authoritarianism.

Having established, to his own satisfaction at any rate, that only microfounded models have a legitimate claim to be considered economic, Yates defends the claim that microfounded models have merit by citing the Lucas critique as an early example of a meritorious insight derived from the “microfoundations project.” Now there is something a bit odd about this claim, because Yates neglects to mention that the Lucas critique, as Lucas himself acknowledged, had been anticipated by earlier economists, including both Keynes and Tinbergen. So if the microfoundations project does indeed have merit, the example chosen to illustrate that merit does nothing to show that the merit is in any way peculiar to the microfoundations project. It also bears repeating (see my earlier post on the Lucas critique) that the Lucas critique only tells us about steady states, so it provides no useful information, insight, prediction or guidance about using monetary policy to speed up the recovery to a new steady state. So we should be careful not to attribute more merit to the Lucas critique than it actually possesses.

To be sure, in his Twitter exchange with Adam Posen, Yates mentioned several other meritorious contributions from the microfoundations project, each of which Posen rejected because the merit of those contributions lies in the intuition behind the one-line idea. To which Yates responded:

This statement is highly perplexing to me.  Economic ideas are claims about what people and firms and governments do, and why, and what unfolds as a consequence.  The models are the ideas.  ‘Intuition’, the verbal counterpart to the models, are not separate things, the origins of the models.  They are utterances to ourselves that arise from us comprehending the logical object of the model, in the same way that our account to ourselves of an equation arises from the model.  One could make an argument for the separateness of ‘intuition’ at best, I think, as classifying it in some cases to be a conjecture about what a possible economic world [a microfounded model] would look like.  Intuition as story-telling to oneself can sometimes be a good check on whether what we have done is nonsense.  But not always.  Lots of results are not immediately intuitive.  That’s not a reason to dismiss it.  (Just like most of modern physics is not intuitive.)  Just a reason to have another think and read through your code carefully.

And Yates’s response is highly perplexing to me. An economic model is usually the product of a thought process intended to construct a coherent structure from some mental raw materials (ideas) and resources (knowledge and techniques). The thought process is an attempt to embody some idea or ideas about a posited causal mechanism or about a posited mutual interdependency among variables of interest. The intuition is the idea or insight that some such causal mechanism or mutual interdependency exists. A model is one particular implementation (out of many other possible implementations) of the idea in a way that allows further implications of the idea to be deduced, thereby achieving an enhanced and deeper understanding of the original insight. The “microfoundations project” does not directly determine what kinds of ideas can be modeled, but it does require that models have certain properties to be considered acceptable implementations of any idea. In particular, the model must incorporate a dynamic stochastic general equilibrium system with rational expectations and a unique equilibrium. Ideas not tractable given those modeling constraints are excluded. Posen’s point, it seems to me, is not that no worthwhile, meritorious ideas have been modeled within the modeling constraints imposed by the microfoundations project, but that the microfoundations project has done nothing to create or propagate those ideas; it has just forced those ideas to be implemented within the template of the microfoundations project.

None of the characteristic properties of the microfoundations project are assumptions for which there is compelling empirical or theoretical justification. We know how to prove the existence of a general equilibrium for economic models populated by agents satisfying certain rationality assumptions (assumptions for which there is no compelling a priori argument and whose primary justifications are tractability and the accuracy of the empirical implications deduced from them), but the conditions for a unique general equilibrium are far more stringent than the standard convexity assumptions required to prove existence. Moreover, even given the existence of a unique general equilibrium, there is no proof that an economy not in general equilibrium will reach the general equilibrium under the standard rules of price adjustment. Nor is there any empirical evidence to suggest that actual economies are in any sense in a general equilibrium, though one might reasonably suppose that actual economies are from time to time in the neighborhood of a general equilibrium. The rationality of expectations is in one sense an entirely ad hoc assumption, though an inconsistency between the predictions of a model under the assumption of rational expectations and the rational expectations of the agents in the model is surely a sign that there is a problem in the structure of the model. But just because rational expectations can be used to check for latent design flaws in a model, it does not follow that assuming rational expectations leads to empirical implications that are generally, or even occasionally, empirically valid.

Thus, the key assumptions of microfounded models are not logically entailed by any deep axioms; they are imposed by methodological fiat, a philosophically and pragmatically unfounded insistence that certain modeling conventions be adhered to in order to count as “scientific.” Now it would be one thing if these modeling conventions were generating new, previously unknown, empirical relationships or generating more accurate predictions than those generated by non-microfounded models, but evidence that the predictions of microfounded models are better than the predictions of non-microfounded models is notably lacking. Indeed, Carlaw and Lipsey have shown that microfounded models generate predictions that are less accurate than those generated by non-microfounded models. If microfounded theories represent scientific progress, they ought to be producing an increase, not a decrease, in explanatory power.

The microfoundations project is predicated on a gigantic leap of faith that the existing economy has an underlying structure that corresponds closely enough to the assumptions of the Arrow-Debreu model, suitably adjusted for stochastic elements and a variety of frictions (e.g., Calvo pricing) that may be introduced into the models depending on the modeler’s judgment about what constitutes an allowable friction. This is classic question-begging with a vengeance: arriving at a conclusion by assuming what needs to be proved. Such question begging is not necessarily illegitimate; every research program is based on some degree of faith or optimism that results not yet in hand will justify the effort required to generate those results. What is not legitimate is the claim that ONLY the models based on such question-begging assumptions are genuinely scientific.

This question-begging mentality masquerading as science is actually not unique to the microfoundations school. It is not uncommon among those with an exaggerated belief in the powers of science, a mentality that Hayek called scientism. It is akin to physicalism, the philosophical doctrine that all phenomena are physical. According to physicalism, there are no mental phenomena. What we perceive as mental phenomena, e.g., consciousness, is not real, but an illusion. Our mental states are really nothing but physical states. I do not say that physicalism is false, just that it is a philosophical position, not a proposition derived from science, and certainly not a fact that is, or can be, established by the currently available tools of science. It is a faith that some day — some day probably very, very far off into the future — science will demonstrate that our mental processes can be reduced to, and derived from, the laws of physics. Similarly, given the inability to account for observed fluctuations of output and employment in terms of microfoundations, the assertion that only microfounded models are scientific is simply an expression of faith in some, as yet unknown, future discovery, not a claim supported by any available scientific proof or evidence.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
