Archive for the 'Robert Lucas' Category

Thompson’s Reformulation of Macroeconomic Theory, Part V: A Neoclassical Black Hole

It’s been over three years since I posted the fourth installment in this series about Earl Thompson’s unpublished paper “A Reformulation of Macroeconomic Theory,” Thompson’s strictly neoclassical alternative to the standard Keynesian IS-LM model. Given the long hiatus, a short recapitulation seems in order.

The first installment was an introduction summarizing Thompson’s two main criticisms of the Keynesian model: 1) the disconnect between the standard neoclassical marginal-productivity theory of production and factor pricing and the Keynesian assertion that labor receives a wage equal to its marginal product: that assertion implies the existence of a second scarce factor of production (capital), yet the market for capital services is replaced in the IS-LM model by the Keynesian expenditure functions, creating a potential inconsistency between the IS-LM model and a deep property of neoclassical theory; 2) because the market for capital services is excluded from the IS-LM model, the model lacks a variable that equilibrates the choice between holding money or real assets, so the Keynesian investment function is incompletely specified. The Keynesian equilibrium condition for spending, equality between savings and investment, takes no account of the incentive for capital accumulation or of the relationship, explicitly discussed by Keynes, between current investment and the (expected) future price level. Excluding the dependence of the equilibrium rate of spending on (expected) inflation thus renders the IS-LM model logically incomplete.

The second installment was a discussion of the Hicksian temporary-equilibrium method used by Thompson to rationalize the existence of involuntary unemployment. For Thompson, involuntary unemployment means unemployment caused by workers’ overly optimistic expectations of wage offers, leading them to mistakenly set their reservation wages too high. The key advantage of the temporary-equilibrium method is that it reconciles the convention of allowing a market-clearing price to equilibrate supply and demand with the phenomenon of substantial involuntary unemployment in business-cycle downturns. Because workers have an incentive to withhold their services in order to engage in further job search or job training or leisure, their actual short-run supply of labor services in a given time period is highly elastic at the expected wage. If wage offers are below expectations, workers (mistakenly = involuntarily) choose unemployment, but given those mistaken expectations, the labor market is cleared, with the observed wage equilibrating the demand for and supply of labor services. There are clearly problems with this way of modeling the labor market, but it does provide an analytical technique that can account for cyclical fluctuations in unemployment within a standard microeconomic framework.

In the third installment, I showed how Thompson derived his FF curve, representing combinations of price levels and interest rates consistent with (temporary) equilibrium in both factor markets (labor services and capital services), and two versions of the LM curve, representing price levels and interest rates consistent with equilibrium in the money market. The two versions of the LM curve (analogous, but not identical, to the Keynesian LM curve) correspond to different monetary regimes. In what Thompson called the classical case, the price level is fixed by convertibility of output into cash at a fixed exchange rate, with money being supplied by a competitive banking system paying competitive interest on cash balances. The LM curve in this case is vertical at the fixed price level, with any nominal rate of interest being consistent with equilibrium in the money market, inasmuch as the amount of money demanded depends not on the nominal interest rate, but on the difference between the nominal interest rate and the competitively determined interest rate paid on cash. In the modern case, cash is non-interest-bearing and supplied monopolistically by the monetary authority, so the LM curve is upward-sloping, with the cost of holding cash rising with the rate of interest, thereby reducing the amount of money demanded and increasing the price level for a given quantity of money supplied by the monetary authority. The solution of the model corresponds to the intersection of the FF and LM curves. In the classical case, the intersection is unique, but in the modern case, because both curves are upward-sloping, multiple intersections are possible.

The focus of the fourth installment was on setting up a model analogous to the Keynesian model by replacing the market for capital services excluded by Walras’s Law with something similar to the Keynesian expenditure functions (consumption, investment, government spending, etc.). The key point is that the FF and LM curves implicitly define a corresponding CC curve (shown in Figure 4 of the third installment) with the property that, at all points on the CC curve, the excess demand for (supply of) money exactly equals the excess supply of (demand for) labor. Thus, the CC curve represents a stock equilibrium in the market for commodities (i.e., a single consumption/capital good) rather than a flow rate of expenditure and income as represented by the conventional IS curve. But the inconsistency between the upward-sloping CC curve and the downward-sloping IS curve reflects the underlying inconsistency between the neoclassical and the Keynesian paradigms.

In this installment, I am going to work through Thompson’s argument about the potential for an unstable equilibrium in the version of his model with an upward-sloping LM curve corresponding to the case in which non-interest bearing money is monopolistically supplied by a central bank. Thompson makes the argument using Figure 5, a phase diagram showing the potential equilibria for such an economy in terms of the FF curve (representing price levels and nominal interest rates consistent with equilibrium in the markets for labor and capital services) and the CC curve (representing price levels and nominal interest rates consistent with equilibrium in the output market).

(Thompson’s Figure 5)

A phase diagram shows the direction of price adjustment when the economy is not in equilibrium (one of the two points of intersection between the FF and the CC curves). A disequilibrium implies a price change in response to an excess supply or excess demand in some market. All points above and to the left of the FF curve correspond to an excess supply of capital services, implying a falling nominal interest rate; points below and to the right of the FF curve correspond to an excess demand for capital services, implying a rising interest rate. Points above and to the left of the CC curve correspond to an excess demand for output, implying a rising price level; points below and to the right of the CC curve correspond to an excess supply of output, implying a falling price level. Points between the FF and CC curves correspond either to an excess demand for both commodities and capital services, implying a rising price level and a rising nominal interest rate (in the region between the two points of intersection, Eu and Es, of the CC and FF curves), or to an excess supply of both capital services and commodities, implying a falling interest rate and a falling price level (in the regions below the lower intersection Eu and above the upper intersection Es). The arrows in the diagram indicate the direction in which the price level and the nominal interest rate are changing at any point in the diagram.

Given the direction of price change corresponding to points off the CC and FF curves, the upper intersection is shown to be a stable equilibrium, while the lower intersection is unstable. Moreover, the instability corresponding to the lower intersection is very dangerous, because entering the region between the CC and FF curves below Eu means getting sucked into a vicious downward spiral of prices and interest rates that can only be prevented by a policy intervention to shift the CC curve to the right, either directly by way of increased government spending or tax cuts, or indirectly, through monetary policy aimed at raising the price level and expected inflation, shifting the LM curve, and thereby the CC curve, to the right. It’s like stepping off a cliff into a black hole.
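The stability claims can be checked with a small numerical sketch. Everything in the snippet below is an illustrative assumption: the functional forms for the FF and CC curves, the adjustment speeds, and the starting points are invented solely to reproduce the qualitative configuration of Figure 5 (two upward-sloping curves crossing twice); nothing comes from Thompson’s paper beyond the sign conventions for price and interest-rate adjustment just described.

```python
# Illustrative phase-diagram dynamics for Thompson's modern (monopoly-money)
# case. The functional forms are hypothetical, chosen only so that the two
# upward-sloping curves intersect twice, at P ~ 0.53 (unstable, Eu) and
# P ~ 9.47 (stable, Es).

def f_FF(P):
    """Hypothetical FF curve: interest rate clearing the factor markets at P."""
    return P

def f_CC(P):
    """Hypothetical CC curve: interest rate clearing the output market at P."""
    return 0.5 + 0.1 * P ** 2

def simulate(P0, i0, steps=20000, dt=0.001, alpha=1.0, beta=1.0):
    """Euler integration of the adjustment rules in the text: the interest
    rate falls when (P, i) lies above the FF curve (excess supply of capital
    services) and rises below it; the price level rises above the CC curve
    (excess demand for output) and falls below it."""
    P, i = P0, i0
    for _ in range(steps):
        di = -alpha * (i - f_FF(P))   # interest rate adjusts toward the FF curve
        dP = beta * (i - f_CC(P))     # price level adjusts toward the CC curve
        i += dt * di
        P += dt * dP
    return P, i
```

Started at (P, i) = (9.0, 9.0), the system converges to the upper intersection near (9.47, 9.47); started at (0.45, 0.45), just below the lower intersection, both variables fall cumulatively instead of returning to equilibrium, which is the downward spiral just described.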

Although I have a lot of reservations about the practical relevance of this model as an analytical tool for understanding cyclical fluctuations and counter-cyclical policy, which I plan to discuss in a future post, the model does resonate with me, and it does so especially after my recent posts about the representative-agent modeling strategy in New Classical economics (here, here, and here). Representative-agent models, I argued, are inherently unable to serve as analytical tools in macroeconomics, because their reductionist approach implies that all relevant decision making can be reduced to the optimization of a single agent, insulating the analysis from any interactions between decision-makers. But it is precisely the interaction effects between decision makers that create analytical problems that constitute the subject matter of the discipline or sub-discipline known as macroeconomics. That Robert Lucas has made it his life’s work to annihilate this field of study is a sad commentary on his contribution, Nobel Prize or no Nobel Prize, as an economic theorist.

That is one reason why I regard Thompson’s model, despite its oversimplifications, as important: it is constructed on a highly aggregated, yet strictly neoclassical, foundation, including continuous market-clearing, and it arrives at the remarkable conclusion that not only is there an unstable equilibrium, but it is at least possible for an economy in the neighborhood of the unstable equilibrium to be caught in a vicious downward deflationary spiral in which falling prices do not restore equilibrium but, instead, suck the economy into a zero-output black hole. That result seems to me to be a major conceptual breakthrough, showing that the strict rationality assumptions of neoclassical theory can lead to an outcome that is totally at odds with the usual presumption that the standard neoclassical assumptions inevitably generate a unique stable equilibrium and render macroeconomics superfluous.

The Neoclassical Synthesis and the Mind-Body Problem

The neoclassical synthesis that emerged in the early postwar period aimed at reconciling the macroeconomic (IS-LM) analysis derived from Keynes via Hicks and others with the neoclassical microeconomic analysis of general equilibrium derived from Walras. The macroeconomic analysis was focused on an equilibrium of income and expenditure flows, while the Walrasian analysis was focused on the equilibrium between supply and demand in individual markets. The two types of analysis seemed to be incommensurate, inasmuch as the conditions for equilibrium in the two analyses did not seem to match up against each other. How does an analysis focused on the equality of aggregate flows of income and expenditure get translated into an analysis focused on the equality of supply and demand in individual markets? The two languages seem to be different, so it is not obvious how a statement formulated in one language gets translated into the other. And even if a translation is possible, does the translation hold under all, or only under some, conditions? And if so, what are those conditions?

The original neoclassical synthesis did not aim to provide a definitive answer to those questions, but it was understood to assert that if the equality of income and expenditure was assured at a level consistent with full employment, one could safely assume that market forces would take care of the allocation of resources, so that markets would be cleared and the conditions of microeconomic general equilibrium satisfied, at least as a first approximation. This version of the neoclassical synthesis was obviously ad hoc and an unsatisfactory resolution of the incommensurability of the two levels of analysis. Don Patinkin sought to provide a rigorous reconciliation of the two levels of analysis in his treatise Money, Interest and Prices. But for all its virtues – and they are numerous – Patinkin’s treatise failed to bridge the gap between the two levels of analysis.

As I mentioned recently in a post on Romer and Lucas, Kenneth Arrow in a 1967 review of Samuelson’s Collected Works commented disparagingly on the neoclassical synthesis of which Samuelson was a leading proponent. The widely shared dissatisfaction expressed by Arrow motivated much of the work that soon followed on the microfoundations of macroeconomics exemplified in the famous 1970 Phelps volume. But the motivation for the search for microfoundations was then (before the rational expectations revolution) to specify the crucial deviations from the assumptions underlying the standard Walrasian general-equilibrium model that would generate actual or seeming price rigidities, which a straightforward – some might say superficial — understanding of neoclassical microeconomic theory suggested were necessary to explain why, after a macro-disturbance, equilibrium was not rapidly restored by price adjustments. Two sorts of explanations emerged from the early microfoundations literature: a) search and matching theories assuming that workers and employers must expend time and resources to find appropriate matches; b) institutional theories of efficiency wages or implicit contracts that explain why employers and workers prefer layoffs to wage cuts in response to negative demand shocks.

Forty years on, the search and matching theories do not seem capable of accounting for the magnitude of observed fluctuations in employment or the cyclical variation in layoffs, and the institutional theories are still difficult to reconcile with the standard neoclassical assumptions, remaining an ad hoc appendage to New Keynesian models that otherwise adhere to the neoclassical paradigm. The original neoclassical synthesis, in which the Keynesian income-expenditure model was seen as a pre-condition for the validity of the neoclassical model, was thus rejected within a decade of Arrow’s dismissive comment. Yet, as Tom Sargent has observed in a recent review of Robert Lucas’s Collected Papers on Monetary Theory, Lucas has implicitly adopted a new version of the neoclassical synthesis dominated by an intertemporal neoclassical general-equilibrium model, but with the proviso that substantial shocks to aggregate demand and the price level are prevented by monetary policy, thereby making the neoclassical model a reasonable approximation to reality.

Ok, so you are probably asking what all this has to do with the mind-body problem. A lot, I think, in that both the neoclassical synthesis and the mind-body problem involve a disconnect between two kinds, or two levels, of explanation. The neoclassical synthesis asserts some sort of connection, albeit a problematic one, between the explanatory apparatus (macroeconomics) used to understand the cyclical fluctuations of what we are used to thinking of as the aggregate economy and the explanatory apparatus (microeconomics) used to understand the constituent elements of that economy, households and firms, and how those elements are related to, and interact with, each other.

The mind-body problem concerns the relationship between the mental – our direct experience of a conscious inner life of thoughts, emotions, memories, decisions, hopes and regrets – and the physical – matter, atoms, neurons. A basic postulate of science is that all phenomena have material causes. So the existence of conscious states that seem to us, by way of our direct experience, to be independent of material causes is highly problematic. There are a few strategies for handling the problem. One is to assert that the mind truly is independent of the body, which is to say that consciousness is not the result of physical causes. A second is to say that mind is not independent of the body; we just don’t understand the nature of the relationship. There are two possible versions of this strategy: a) although the nature of the relationship is unknown to us now, advances in neuroscience could reveal to us the way in which consciousness is caused by the operation of the brain; b) although our minds are somehow related to the operation of our brains, the nature of this relationship is beyond the capacity of our minds or brains to comprehend, owing to considerations analogous to Gödel’s incompleteness theorem (a view espoused by the philosopher Colin McGinn, among others); in other words, the mind-body problem is inherently beyond human understanding. And the third strategy is to deny the separate existence of consciousness, asserting that a conscious state is identical with the physical state of a brain, so that consciousness is just an epiphenomenon of a brain state; we in our naivete may think that our conscious states have a separate existence, but those states are strictly identical with corresponding brain states, so that whatever conscious state we think we are experiencing has been entirely produced by the physical forces that determine the behavior of our brains and the configuration of their physical constituents.

The first, and probably the last, thing that one needs to understand about the third strategy is that, as explained by Colin McGinn (see e.g., here), its validity has not been demonstrated by neuroscience or by any other branch of science; it is, no less than any of the other strategies, strictly a metaphysical position. The mind-body problem is a problem precisely because science has not even come close to demonstrating how mental states are caused by, let alone that they are identical to, brain states, despite some spurious misinterpretations of research that purport to show such an identity.

Analogous to the scientific principle that all phenomena have material or physical causes, there is in economics and social science a principle called methodological individualism, which roughly states that explanations of social outcomes should be derived from theories about the conduct of individuals, not from theories about abstract social entities that exist independently of their constituent elements. The underlying motivation for methodological individualism (as opposed to political individualism, with which it is related but from which it is distinct) was to counter certain ideas popular in the nineteenth and twentieth centuries asserting the existence of metaphysical social entities like “history” that are somehow distinct from, yet impinge upon, individual human beings, and that there are laws of history or social development from which future states of the world can be predicted, as Hegel, Marx and others tried to do. This notion gave rise to two famous books by Popper: The Open Society and Its Enemies and The Poverty of Historicism. Methodological individualism as articulated by Popper was thus primarily an attack on the attribution of special powers to determine the course of future events to abstract metaphysical or mystical entities like history or society that are supposedly things or beings in themselves, distinct from the individual human beings of which they are constituted. Methodological individualism does not deny the existence of collective entities like society; it simply denies that such collective entities exist as objective facts that can be observed as such. Our apprehension of these entities must be built up from more basic elements – individuals and their plans, beliefs and expectations – that we can apprehend directly.

However, methodological individualism is not the same as reductionism; methodological individualism teaches us to look for explanations of higher-level phenomena, e.g., a pattern of social relationships like the business cycle, in terms of the basic constituents forming the pattern: households, business firms, banks, central banks and governments. It does not assert identity between the pattern of relationships and the constituent elements; it says that the pattern can be understood in terms of interactions between the elements. Thus, a methodologically individualistic explanation of the business cycle in terms of the interactions between agents – households, businesses, etc. — would be analogous to an explanation of consciousness in terms of the brain if an explanation of consciousness existed. A methodologically individualistic explanation of the business cycle would not be analogous to an assertion that consciousness exists only as an epiphenomenon of brain states. The assertion that consciousness is nothing but the epiphenomenon of a corresponding brain state is reductionist; it asserts an identity between consciousness and brain states without explaining how consciousness is caused by brain states.

In business-cycle theory, the analogue of such a reductionist assertion of identity between higher-level and lower level phenomena is the assertion that the business cycle is not the product of the interaction of individual agents, but is simply the optimal plan of a representative agent. On this account, the business cycle becomes an epiphenomenon; apparent fluctuations being nothing more than the optimal choices of the representative agent. Of course, everyone knows that the representative agent is merely a convenient modeling device in terms of which a business-cycle theorist tries to account for the observed fluctuations. But that is precisely the point. The whole exercise is a sham; the representative agent is an as-if device that does not ground business-cycle fluctuations in the conduct of individual agents and their interactions, but simply asserts an identity between those interactions and the supposed decisions of the fictitious representative agent. The optimality conditions in terms of which the model is solved completely disregard the interactions between individuals that might cause an unintended pattern of relationships between those individuals. The distinctive feature of methodological individualism is precisely the idea that the interactions between individuals can lead to unintended consequences; it is by way of those unintended consequences that a higher-level pattern might emerge from interactions among individuals. And those individual interactions are exactly what is suppressed by representative-agent models.

So the notion that any analysis premised on a representative agent provides microfoundations for macroeconomic theory seems to be a travesty built on a total misunderstanding of the principle of methodological individualism that it purports to affirm.

Romer v. Lucas

A couple of months ago, Paul Romer created a stir by publishing “Mathiness in the Theory of Economic Growth,” a paper in the American Economic Review attacking two papers on aspects of growth theory, one by McGrattan and Prescott and the other by Lucas and Moll. He accused the authors of those papers of using mathematical modeling as a cover behind which to hide assumptions guaranteeing results by which the authors could promote their research agendas. In subsequent blog posts, Romer has sharpened his attack, focusing it more directly on Lucas, whom he accuses of a non-scientific attachment to ideological predispositions that have led him to violate what Romer calls Feynman integrity, a concept eloquently described by Feynman himself in a 1974 commencement address at Caltech.

It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

Romer contrasts this admirable statement of what scientific integrity means with another by George Stigler, seemingly justifying, or at least excusing, a kind of special pleading on behalf of one’s own theory. And the institutional, and perhaps ideological, association between Stigler and Lucas seems to suggest that Lucas is inclined to follow the permissive and flexible Stiglerian ethic rather than the rigorous Feynman standard of scientific integrity. Romer regards this as a breach of the scientific method and a step backward for economics as a science.

I am not going to comment on the specific infraction that Romer accuses Lucas of having committed; I am not familiar with the mathematical question in dispute. Certainly if Lucas was aware that his argument in the paper Romer criticizes depended on the particular mathematical assumption in question, Lucas should have acknowledged that to be the case. And even if, as Lucas asserted in responding to a direct question by Romer, he could have derived the result in a more roundabout way, then he should have pointed that out, too. However, I don’t regard the infraction alleged by Romer to be more than a misdemeanor, hardly a scandalous breach of the scientific method.

Why did Lucas, who as far as I can tell was originally guided by Feynman integrity, switch to the mode of Stigler conviction? Market clearing did not have to evolve from auxiliary hypothesis to dogma that could not be questioned.

My conjecture is economists let small accidents of intellectual history matter too much. If we had behaved like scientists, things could have turned out very differently. It is worth paying attention to these accidents because doing so might let us take more control over the process of scientific inquiry that we are engaged in. At the very least, we should try to reduce the odds that personal frictions and simple misunderstandings could once again cause us to veer off on some damaging trajectory.

I suspect that it was personal friction and a misunderstanding that encouraged a turn toward isolation (or if you prefer, epistemic closure) by Lucas and colleagues. They circled the wagons because they thought that this was the only way to keep the rational expectations revolution alive. The misunderstanding is that Lucas and his colleagues interpreted the hostile reaction they received from such economists as Robert Solow to mean that they were facing implacable, unreasoning resistance from such departments as MIT. In fact, in a remarkably short period of time, rational expectations completely conquered the PhD program at MIT.

More recently Romer, having done graduate work both at MIT and Chicago in the late 1970s, has elaborated on the personal friction between Solow and Lucas and how that friction may have affected Lucas, causing him to disengage from the professional mainstream. Paul Krugman, who was at MIT when this nastiness was happening, is skeptical of Romer’s interpretation.

My own view is that being personally and emotionally attached to one’s own theories, whether for religious or ideological or other non-scientific reasons, is not necessarily a bad thing as long as there are social mechanisms allowing scientists with different scientific viewpoints an opportunity to make themselves heard. If there are such mechanisms, the need for Feynman integrity is minimized, because individual lapses of integrity will be exposed and remedied by criticism from other scientists; scientific progress is possible even if scientists don’t live up to the Feynman standards, and maintain their faith in their theories despite contradictory evidence. But, as I am going to suggest below, there are reasons to doubt that social mechanisms have been operating to discipline – not suppress, just discipline – dubious economic theorizing.

My favorite example of the importance of personal belief in, and commitment to the truth of, one’s own theories is Galileo. As discussed by T. S. Kuhn in The Structure of Scientific Revolutions, Galileo was arguing for a paradigm change in how to think about the universe, despite being confronted by empirical evidence that appeared to refute the Copernican worldview he believed in: the observations that the sun revolves around the earth, and that the earth, as we directly perceive it, is, apart from the occasional earthquake, totally stationary – good old terra firma. Despite that apparently contradictory evidence, Galileo had an alternative vision of the universe in which the obvious movement of the sun in the heavens was explained by the spinning of the earth on its axis, and the stationarity of the earth by the assumption that all our surroundings move along with the earth, rendering its motion imperceptible, our perception of motion being relative to a specific frame of reference.

At bottom, this was an almost metaphysical world view not directly refutable by any simple empirical test. But Galileo adopted this worldview or paradigm, because he deeply believed it to be true, and was therefore willing to defend it at great personal cost, refusing to recant his Copernican view when he could have easily appeased the Church by describing the Copernican theory as just a tool for predicting planetary motion rather than an actual representation of reality. Early empirical tests did not support heliocentrism over geocentrism, but Galileo had faith that theoretical advancements and improved measurements would eventually vindicate the Copernican theory. He was right of course, but strict empiricism would have led to a premature rejection of heliocentrism. Without a deep personal commitment to the Copernican worldview, Galileo might not have articulated the case for heliocentrism as persuasively as he did, and acceptance of heliocentrism might have been delayed for a long time.

Imre Lakatos called such deeply held views underlying a scientific theory the hard core of the theory (aka scientific research program), a set of beliefs that are maintained despite apparent empirical refutation. The response to any empirical refutation is not to abandon or change the hard core but to adjust what Lakatos called the protective belt of the theory. Eventually, as refutations or empirical anomalies accumulate, the research program may undergo a crisis, leading to its abandonment, or it may simply degenerate if it fails to solve new problems or discover any new empirical facts or regularities. So Romer’s criticism of Lucas’s dogmatic attachment to market clearing (Lucas frequently makes use of ad hoc price-stickiness assumptions; I don’t know why Romer identifies market clearing as a Lucasian dogma) may be no more justified from a history-of-science perspective than would be criticism of Galileo’s dogmatic attachment to heliocentrism.

So while I have many problems with Lucas, lack of Feynman integrity is not really one of them, certainly not in the top ten. What I find more disturbing is his narrow conception of what economics is. As he himself wrote in an autobiographical sketch for Lives of the Laureates, he was bewitched by the beauty and power of Samuelson’s Foundations of Economic Analysis when he read it the summer before starting his training as a graduate student at Chicago in 1960. Although it did not have the transformative effect on me that it had on Lucas, I too greatly admire the Foundations; but regardless of whether Samuelson himself meant to suggest such an idea (which I doubt), it is absurd to draw the following conclusion from it:

I loved the Foundations. Like so many others in my cohort, I internalized its view that if I couldn’t formulate a problem in economic theory mathematically, I didn’t know what I was doing. I came to the position that mathematical analysis is not one of many ways of doing economic theory: It is the only way. Economic theory is mathematical analysis. Everything else is just pictures and talk.

Oh, come on. Would anyone ever think that unless you can formulate the problem of whether the earth revolves around the sun or the sun around the earth mathematically, you don’t know what you are doing? And, yet, remarkably, on the page following that silly assertion, one finds a totally brilliant description of what it was like to take graduate price theory from Milton Friedman.

Friedman rarely lectured. His class discussions were often structured as debates, with student opinions or newspaper quotes serving to introduce a problem and some loosely stated opinions about it. Then Friedman would lead us into a clear statement of the problem, considering alternative formulations as thoroughly as anyone in the class wanted to. Once formulated, the problem was quickly analyzed—usually diagrammatically—on the board. So we learned how to formulate a model, to think about and decide which features of a problem we could safely abstract from and which we needed to put at the center of the analysis. Here “model” is my term: It was not a term that Friedman liked or used. I think that for him talking about modeling would have detracted from the substantive seriousness of the inquiry we were engaged in, would divert us away from the attempt to discover “what can be done” into a merely mathematical exercise. [my emphasis].

Despite his respect for Friedman, it’s clear that Lucas did not adopt and internalize Friedman’s approach to economic problem solving, but instead internalized the caricature he extracted from Samuelson’s Foundations: that mathematical analysis is the only legitimate way of doing economic theory, and that, in particular, the essence of macroeconomics consists in a combination of axiomatic formalism and philosophical reductionism (microfoundationalism). For Lucas, the only scientifically legitimate macroeconomic models are those that can be deduced from the axiomatized Arrow-Debreu-McKenzie general equilibrium model, with solutions that can be computed and simulated in such a way that the simulations can be matched up against the available macroeconomic time series on output, investment and consumption.

This was both bad methodology and bad science, restricting the formulation of economic problems to those for which mathematical techniques are available to be deployed in finding solutions. On the one hand, the rational-expectations assumption made finding solutions to certain intertemporal models tractable; on the other, the assumption was justified as being required by the rationality assumptions of neoclassical price theory.

In a recent review of Lucas’s Collected Papers on Monetary Theory, Thomas Sargent makes a fascinating reference to Kenneth Arrow’s 1967 review of the first two volumes of Paul Samuelson’s Collected Works in which Arrow referred to the problematic nature of the neoclassical synthesis of which Samuelson was a chief exponent.

Samuelson has not addressed himself to one of the major scandals of current price theory, the relation between microeconomics and macroeconomics. Neoclassical microeconomic equilibrium with fully flexible prices presents a beautiful picture of the mutual articulations of a complex structure, full employment being one of its major elements. What is the relation between this world and either the real world with its recurrent tendencies to unemployment of labor, and indeed of capital goods, or the Keynesian world of underemployment equilibrium? The most explicit statement of Samuelson’s position that I can find is the following: “Neoclassical analysis permits of fully stable underemployment equilibrium only on the assumption of either friction or a peculiar concatenation of wealth-liquidity-interest elasticities. . . . [The neoclassical analysis] goes far beyond the primitive notion that, by definition of a Walrasian system, equilibrium must be at full employment.” . . .

In view of the Phillips curve concept in which Samuelson has elsewhere shown such interest, I take the second sentence in the above quotation to mean that wages are stationary whenever unemployment is X percent, with X positive; thus stationary unemployment is possible. In general, one can have a neoclassical model modified by some elements of price rigidity which will yield Keynesian-type implications. But such a model has yet to be constructed in full detail, and the question of why certain prices remain rigid becomes of first importance. . . . Certainly, as Keynes emphasized the rigidity of prices has something to do with the properties of money; and the integration of the demand and supply of money with general competitive equilibrium theory remains incomplete despite attempts beginning with Walras himself.

If the neoclassical model with full price flexibility were sufficiently unrealistic that stable unemployment equilibrium be possible, then in all likelihood the bulk of the theorems derived by Samuelson, myself, and everyone else from the neoclassical assumptions are also contrafactual. The problem is not resolved by what Samuelson has called “the neoclassical synthesis,” in which it is held that the achievement of full employment requires Keynesian intervention but that neoclassical theory is valid when full employment is reached. . . .

Obviously, I believe firmly that the mutual adjustment of prices and quantities represented by the neoclassical model is an important aspect of economic reality worthy of the serious analysis that has been bestowed on it; and certain dramatic historical episodes – most recently the reconversion of the United States from World War II and the postwar European recovery – suggest that an economic mechanism exists which is capable of adaptation to radical shifts in demand and supply conditions. On the other hand, the Great Depression and the problems of developing countries remind us dramatically that something beyond, but including, neoclassical theory is needed.

Perhaps in a future post, I may discuss this passage, including a few sentences that I have omitted here, in greater detail. For now I will just say that Arrow’s reference to a “neoclassical microeconomic equilibrium with fully flexible prices” seems very strange inasmuch as price flexibility has absolutely no role in the proofs of the existence of a competitive general equilibrium for which Arrow and Debreu and McKenzie are justly famous. All the theorems Arrow et al. proved about the neoclassical equilibrium were related to existence, uniqueness and optimality of an equilibrium supported by an equilibrium set of prices. Price flexibility was not involved in those theorems, because the theorems had nothing to do with how prices adjust in response to a disequilibrium situation. What makes this juxtaposition of neoclassical microeconomic equilibrium with fully flexible prices even more remarkable is that about eight years earlier Arrow wrote a paper (“Toward a Theory of Price Adjustment”) whose main concern was the lack of any theory of price adjustment in competitive equilibrium, about which I will have more to say below.

Sargent also quotes from two lectures in which Lucas referred to Don Patinkin’s treatise Money, Interest and Prices which provided perhaps the definitive statement of the neoclassical synthesis Samuelson espoused. In one lecture (“My Keynesian Education” presented to the History of Economics Society in 2003) Lucas explains why he thinks Patinkin’s book did not succeed in its goal of integrating value theory and monetary theory:

I think Patinkin was absolutely right to try and use general equilibrium theory to think about macroeconomic problems. Patinkin and I are both Walrasians, whatever that means. I don’t see how anybody can not be. It’s pure hindsight, but now I think that Patinkin’s problem was that he was a student of Lange’s, and Lange’s version of the Walrasian model was already archaic by the end of the 1950s. Arrow and Debreu and McKenzie had redone the whole theory in a clearer, more rigorous, and more flexible way. Patinkin’s book was a reworking of his Chicago thesis from the middle 1940s and had not benefited from this more recent work.

In the other lecture, his 2003 Presidential address to the American Economic Association, Lucas commented further on why Patinkin fell short in his quest to unify monetary and value theory:

When Don Patinkin gave his Money, Interest, and Prices the subtitle “An Integration of Monetary and Value Theory,” value theory meant, to him, a purely static theory of general equilibrium. Fluctuations in production and employment, due to monetary disturbances or to shocks of any other kind, were viewed as inducing disequilibrium adjustments, unrelated to anyone’s purposeful behavior, modeled with vast numbers of free parameters. For us, today, value theory refers to models of dynamic economies subject to unpredictable shocks, populated by agents who are good at processing information and making choices over time. The macroeconomic research I have discussed today makes essential use of value theory in this modern sense: formulating explicit models, computing solutions, comparing their behavior quantitatively to observed time series and other data sets. As a result, we are able to form a much sharper quantitative view of the potential of changes in policy to improve peoples’ lives than was possible a generation ago.

So, as Sargent observes, Lucas recreated an updated neoclassical synthesis of his own based on the intertemporal Arrow-Debreu-McKenzie version of the Walrasian model, augmented by a rationale for the holding of money and perhaps some form of monetary policy, via the assumption of credit-market frictions and sticky prices. Despite the repudiation of the updated neoclassical synthesis by his friend Edward Prescott, for whom monetary policy is irrelevant, Lucas clings to neoclassical synthesis 2.0. Sargent quotes this passage from Lucas’s 1994 retrospective review of A Monetary History of the US by Friedman and Schwartz to show how tightly Lucas clings to neoclassical synthesis 2.0:

In Kydland and Prescott’s original model, and in many (though not all) of its descendants, the equilibrium allocation coincides with the optimal allocation: Fluctuations generated by the model represent an efficient response to unavoidable shocks to productivity. One may thus think of the model not as a positive theory suited to all historical time periods but as a normative benchmark providing a good approximation to events when monetary policy is conducted well and a bad approximation when it is not. Viewed in this way, the theory’s relative success in accounting for postwar experience can be interpreted as evidence that postwar monetary policy has resulted in near-efficient behavior, not as evidence that money doesn’t matter.

Indeed, the discipline of real business cycle theory has made it more difficult to defend real alternatives to a monetary account of the 1930s than it was 30 years ago. It would be a term-paper-size exercise, for example, to work out the possible effects of the 1930 Smoot-Hawley Tariff in a suitably adapted real business cycle model. By now, we have accumulated enough quantitative experience with such models to be sure that the aggregate effects of such a policy (in an economy with a 5% foreign trade sector before the Act and perhaps a percentage point less after) would be trivial.

Nevertheless, in the absence of some catastrophic error in monetary policy, Lucas evidently believes that the key features of the Arrow-Debreu-McKenzie model are closely approximated in the real world. That may well be true. But if it is, Lucas has no real theory to explain why.

In the 1959 paper (“Toward a Theory of Price Adjustment”) that I just mentioned, Arrow noted that the theory of competitive equilibrium has no explanation of how equilibrium prices are actually set. Indeed, the idea of competitive price adjustment is beset by a paradox: all agents in a general equilibrium being assumed to be price takers, how is it that a new equilibrium price is ever arrived at following any disturbance to an initial equilibrium? Arrow had no answer to the question, but offered the suggestion that, out of equilibrium, agents are not price takers, but price searchers, possessing some measure of market power to set price in the transition between the old and new equilibrium. But the upshot of Arrow’s discussion was that the problem and the paradox awaited solution. Almost sixty years on, some of us are still waiting, but for Lucas and the Lucasians, there is neither problem nor paradox, because the actual price is the equilibrium price, and the equilibrium price is always the (rationally) expected price.

If the social functions of science were being efficiently discharged, this rather obvious replacement of problem solving by question begging would not have escaped effective challenge and opposition. But Lucas was able to provide cover for this substitution by persuading the profession to embrace his microfoundational methodology, while offering irresistible opportunities for professional advancement to younger economists who could master the new analytical techniques that Lucas and others were rapidly introducing, thereby neutralizing or coopting many of the natural opponents to what became modern macroeconomics. So while Romer considers the conquest of MIT by the rational-expectations revolution, despite the opposition of Robert Solow, to be evidence for the advance of economic science, I regard it as a sign of the social failure of science to discipline a regressive development driven by the elevation of technique over substance.

Richard Lipsey and the Phillips Curve

Richard Lipsey has had an extraordinarily long and productive career as both an economic theorist and an empirical economist, making numerous important contributions in almost all branches of economics. (See, for example, the citation about Lipsey as a fellow of the Canadian Economics Association.) In addition, his many textbooks have been enormously influential in advocating that economists should strive to make their discipline empirically relevant by actually subjecting their theories to meaningful empirical tests in which refutation is a realistic possibility, not just a sign that the researcher was insufficiently creative in theorizing or in performing the data analysis.

One of Lipsey’s most important early contributions was his 1960 paper on the Phillips Curve, “The Relationship between Unemployment and the Rate of Change of Money Wages in the United Kingdom 1862-1957: A Further Analysis,” in which he extended A. W. Phillips’s original results, and he has continued to write about the Phillips Curve ever since. Lipsey, in line with his empiricist philosophical position, has consistently argued that a well-supported empirical relationship should not be dismissed simply because of a purely theoretical argument about how expectations are formed. In other words, the argument that adjustments in inflation expectations would cause the short-run Phillips curve relation captured by empirical estimates of the relationship between inflation and unemployment to shift may well be valid (as was actually recognized early on by Samuelson and Solow in their famous paper suggesting that the Phillips Curve could be interpreted as a menu of alternative combinations of inflation and unemployment from which policy-makers could choose) in some general qualitative sense. But that does not mean that it had to be accepted as an undisputable axiom of economics that the long-run relationship between unemployment and inflation is necessarily vertical, as Friedman and Phelps and Lucas convinced most of the economics profession in the late 1960s and early 1970s.

A few months ago, Lipsey was kind enough to send me a draft of the paper that he presented at the annual meeting of the History of Economics Society; the paper is called “The Phillips Curve and the Tyranny of an Assumed Unique Macro Equilibrium.” Here is the abstract of the paper.

To make the argument that the behaviour of modern industrial economies since the 1990s is inconsistent with theories in which there is a unique ergodic macro equilibrium, the paper starts by reviewing both the early Keynesian theory in which there was no unique level of income to which the economy was inevitably drawn and the debate about the amount of demand pressure at which it was best to maintain the economy: high aggregate demand and some inflationary pressure or lower aggregate demand and a stable price level. It then covers the rise of the simple Phillips curve and its expectations-augmented version, which introduced into current macro theory a natural rate of unemployment (and its associated equilibrium level of national income). This rate was also a NAIRU, the only rate consistent with stable inflation. It is then argued that the current behaviour of many modern economies in which there is a credible policy to maintain a low and steady inflation rate is inconsistent with the existence of either a unique natural rate or a NAIRU but is consistent with evolutionary theory in which there is perpetual change driven by endogenous technological advance. Instead of a NAIRU, evolutionary economies have a non-inflationary band of unemployment (a NAIBU) indicating a range of unemployment and income over which the inflation rate is stable. The paper concludes with the observation that the great pre-Phillips curve debates of the 1950s that assumed that there was a range within which the economy could be run with varying pressures of demand, and varying amounts of unemployment and inflationary pressure, were not as silly as they were made to seem when both Keynesian and New Classical economists accepted the assumption of a perfectly inelastic, long-run Phillips curve located at the unique equilibrium level of unemployment.

Back in January, I wrote a post about the Lucas Critique in which I pointed out that his “proof” that the Phillips Curve is vertical in his celebrated paper on econometric policy evaluation was no proof at all, but simply a very special example in which the only disequilibrium permitted in the model – a misperception of the future price level – would lead an econometrician to estimate a negatively sloped relation between inflation and employment even though under correct expectations of inflation the relationship would be vertical. Allowing for a wider range of behavioral responses, I suggested, might well change the relation between inflation and output even under correctly expected inflation. In his new paper, Lipsey correctly points out that Friedman and Phelps and Lucas, and subsequent New Classical and New Keynesian theoreticians, who have embraced the vertical Phillips Curve doctrine as an article of faith, are also assuming, based on essentially no evidence, that there is a unique macro equilibrium. But there is very strong evidence to suggest that, in fact, any deviation from an initial equilibrium (or equilibrium time path) is likely to cause changes that, in and of themselves, cause a change in conditions that will propel the system toward a new and different equilibrium time path, rather than return to the time path the system had been moving along before it was disturbed. (See my post of almost a year ago about a paper, “Does history matter?: Empirical analysis of evolutionary versus stationary equilibrium views of the economy,” by Carlaw and Lipsey.)

Lipsey concludes his paper with a quotation from his article “The Phillips Curve” published in the volume Famous Figures and Diagrams in Economics edited by Mark Blaug and Peter Lloyd.

Perhaps [then] Keynesians were too hasty in following the New Classical economists in accepting the view that follows from static [and all EWD] models that stable rates of wage and price inflation are poised on the razor’s edge of a unique NAIRU and its accompanying Y*. The alternative does not require a long term Phillips curve trade off, nor does it deny the possibility of accelerating inflations of the kind that have bedevilled many third world countries. It merely states that industrialised economies with low expected inflation rates may be less precisely responsive than current theory assumes because they are subject to many lags and inertias, and are operating in an ever-changing and uncertain world of endogenous technological change, which has no unique long term static equilibrium. If so, the economy may not be similar to the smoothly functioning mechanical world of Newtonian mechanics but rather to the imperfectly evolving world of evolutionary biology. The Phillips relation then changes from being a precise curve to being a band within which various combinations of inflation and unemployment are possible but outside of which inflation tends to accelerate or decelerate. Perhaps then the great [pre-Phillips curve] debates of the 1940s and early 1950s that assumed that there was a range within which the economy could be run with varying pressures of demand, and varying amounts of unemployment and inflation[ary pressure], were not as silly as they were made to seem when both Keynesian and New Classical economists accepted the assumption of a perfectly inelastic, one-dimensional, long run Phillips curve located at a unique equilibrium Y* and NAIRU.

Scott Sumner, Meet Robert Lucas

I just saw Scott Sumner’s latest post. It’s about the zero fiscal multiplier. Scott makes a good and important point, which is that, under almost any conditions, fiscal policy cannot be effective if monetary policy is aiming at a policy objective that is inconsistent with that fiscal policy. Here’s how Scott puts it in his typical understated fashion.

From today’s news:

The marked improvement in the labor market since the U.S. central bank began its third round of quantitative easing, or QE3, has added an edge to calls by some policy hawks to dial down the stimulus. The roughly 50 percent jump in monthly job creation since the program began has even won renewed support from centrists, raising at least some chance the Fed could ratchet back its buying as early as next month.

I hope I don’t have to do any more of these.  The fiscal multiplier theory is as dead as John Cleese’s parrot.  The growth in jobs didn’t slow with fiscal austerity, it sped up!  And the Fed is saying that any job improvement due to fiscal stimulus will be offset with tighter money.  They talk like the multiplier is zero, and their actions produce a zero multiplier.

This is classic Sumner, and he deserves credit for rediscovering an argument that Ralph Hawtrey made in 1925, but that was ignored and then forgotten until Sumner figured it out for himself. When I went through Hawtrey’s analysis in my recent series of posts on Hawtrey and Keynes, Scott immediately recognized the identity between what Hawtrey was saying and what he was saying. So up to this point, I am with Scott all the way. But then he loses me by asking the following question:

Has there ever been a more decisive refutation of a major economic theory?

What’s wrong with that question? Well, it seems to me to fly in the face of another critique by another famous economist whom, I think, Scott actually knows: Robert Lucas. Almost 40 years ago, Lucas published a paper about the Phillips Curve in which he argued that the existence of an empirical relationship between inflation and unemployment, even if empirically well-founded, was not a relationship that policy makers could use as a basis for their policy decisions, because the expectations (of low inflation or stable prices) under which the negative relationship between inflation and unemployment was observed would break down once policy makers used that relationship to try to reduce unemployment by increasing inflation. That simple point, dressed up with just enough mathematical notation to obscure its obviousness, helped Lucas win the Nobel Prize, and before long became widely known as the Lucas Critique.

The crux of the Lucas Critique is that economic theory posits deep structural relationships governing economic activity. These structural relationships are necessarily sensitive to the expectations of decision makers, so that no observed empirical relationship between economic variables is invariant to the expectational effects of the policy rules governing policy decisions. Observed relationships between economic variables are useless for policy makers unless they understand those deep structural relationships and how they are affected by expectations.

But now Scott seems to be turning the Lucas Critique on its head by saying that the expectations that result from a particular policy regime (a policy regime that has been subjected to withering criticism by none other than Scott himself) refute a structural theory (that government spending can increase aggregate spending and income) of how the economy works. I don’t think so. The fact that the Fed has adopted and tenaciously sticks to a perverse reaction function cannot refute a theory in which the Fed’s reaction function is a matter of choice, not necessity.

I agree with Scott that monetary policy is usually the best tool for macroeconomic stabilization. But that doesn’t mean that fiscal policy can never ever promote recovery. Even Ralph Hawtrey, originator of the “Treasury view” that fiscal policy is powerless to affect aggregate spending, acknowledged that, in a credit deadlock, when expectations are so pessimistic that the monetary authority is powerless to increase private spending, deficit spending by the government financed by money creation might be the only way to increase aggregate spending. That, to be sure, is a pathological situation. But, with at least some real interest rates currently below zero, it is not impossible to suppose that we are, or have been, in something like a Hawtreyan credit deadlock. I don’t say that we are in one, just that we may be close enough to being there that we can’t confidently exclude the possibility of a positive fiscal multiplier, if only the Fed would listen to Scott and stop targeting 2% inflation.

With US NGDP not even increasing at a 4% annual rate, and the US economy far below its pre-2008 trendline of 5% annual NGDP growth, I don’t understand why one wouldn’t welcome the aid of fiscal policy in getting NGDP to increase at a faster rate than it has for the last 5 years. Sure, the economy has been expanding despite a sharp turn toward contractionary fiscal policy two years ago. But if fiscal stimulus had not been withdrawn so rapidly, can we be sure that the economy would not have grown faster? Under conditions such as these, as Hawtrey himself well understood, the prudent course of action is to err on the side of recklessness.

The Lucas Critique Revisited

After writing my previous post, I reread Robert Lucas’s classic article “Econometric Policy Evaluation: A Critique,” surely one of the most influential economics articles of the last half century. While the main point of the article was not entirely original, as Lucas himself acknowledged in the article, so powerful was his explanation of the point that it soon came to be known simply as the Lucas Critique. The Lucas Critique says that if a certain relationship between two economic variables has been estimated econometrically, policy makers, in formulating a policy for the future, cannot rely on that relationship to persist once a policy aiming to exploit the relationship is adopted. The motivation for the Lucas Critique was the Friedman-Phelps argument that a policy of inflation would fail to reduce the unemployment rate in the long run, because workers would eventually adjust their expectations of inflation, thereby draining inflation of any stimulative effect. By restating the Friedman-Phelps argument as the application of a more general principle, Lucas reinforced and solidified the natural-rate hypothesis, thereby establishing a key principle of modern macroeconomics.

In my previous post I argued that microeconomic relationships, e.g., demand curves and marginal rates of substitution, are, as a matter of pure theory, not independent of the state of the macroeconomy. In an interdependent economy all variables are mutually determined, so there is no warrant for saying that microrelationships are logically prior to, or even independent of, macrorelationships. If so, then the idea of microfoundations for macroeconomics is misleading, because all economic relationships are mutually interdependent; some relationships are not more basic or more fundamental than others. The kernel of truth in the idea of microfoundations is that there are certain basic principles or axioms of behavior that we don’t think an economic model should contradict, e.g., arbitrage opportunities should not be left unexploited – people should not pass up obvious opportunities, such as mutually beneficial offers of exchange, to increase their wealth or otherwise improve their state of well-being.

So I was curious to see whether Lucas, while addressing the issue of how price expectations affected output and employment, recognized the possibility that a microeconomic relationship could be dependent on the state of the macroeconomy. For my purposes, the relevant passage occurs in section 5.3 (subtitled “Phillips Curves”) of the paper. After working out the basic theory earlier in the paper, Lucas, in section 5, provided three examples of how econometric estimates of macroeconomic relationships would mislead policy makers if the effect of expectations on those relationships were not taken into account. The first two subsections treated consumption expenditures and the investment tax credit. The passage that I want to focus on consists of the first two paragraphs of subsection 5.3 (which I now quote verbatim except for minor changes in Lucas’s notation).

A third example is suggested by the recent controversy over the Phelps-Friedman hypothesis that permanent changes in the inflation rate will not alter the average rate of unemployment. Most of the major econometric models have been used in simulation experiments to test this proposition; the results are uniformly negative. Since expectations are involved in an essential way in labor and product market supply behavior, one would presume, on the basis of the considerations raised in section 4, that these tests are beside the point. This presumption is correct, as the following example illustrates.

It will be helpful to utilize a simple, parametric model which captures the main features of the expectational view of aggregate supply – rational agents, cleared markets, incomplete information. We imagine suppliers of goods to be distributed over N distinct markets i, i = 1, . . ., N. To avoid index number problems, suppose that the same (except for location) good is traded in each market, and let y_it be the log of quantity supplied in market i in period t. Assume, further, that the supply y_it is composed of two factors

y_it = Py_it + Cy_it,

where Py_it denotes normal or permanent supply, and Cy_it cyclical or transitory supply (both again in logs). We take Py_it to be unresponsive to all but permanent relative price changes or, since the latter have been defined away by assuming a single good, simply unresponsive to price changes. Transitory supply Cy_it varies with perceived changes in the relative price of goods in i:

Cy_it = β(p_it – Ep_it),

where p_it is the log of the actual price in i at time t, and Ep_it is the log of the general (geometric average) price level in the economy as a whole, as perceived in market i.

Let’s take a moment to ponder the meaning of Lucas’s simplifying assumption that there is just one good. Relative prices (except for spatial differences in an otherwise identical good) are fixed by assumption; a disequilibrium (or suboptimal outcome) can arise only because of misperceptions of the aggregate price level. So, by explicit assumption, Lucas rules out the possibility that any microeconomic relationship depends on macroeconomic conditions. Note also that Lucas does not provide an account of the process by which market prices are established at each location, nothing being said about demand conditions. For example, if suppliers at location i perceive a price (transitorily) above the equilibrium price, and respond by (mistakenly) increasing output, thereby increasing their earnings, do those suppliers increase their demand to consume output? Suppose suppliers live and purchase at locations other than where they are supplying product, so that a supplier at location i purchases at location j, where i does not equal j. If a supplier at location i perceives an increase in price at location i, will his demand to purchase the good at location j increase as well? Will the increase in demand at location j cause an increase in the price at location j? What if there is a one-period lag between supplier receipts and their consumption demands? Lucas provides no insight into these possible ambiguities in his model.

Stated more generally, the problem with Lucas’s example is that it seems to be designed to exclude a priori the possibility of every type of disequilibrium but one, a disequilibrium corresponding to a single type of informational imperfection. Reasoning on the basis of that narrow premise, Lucas shows that, under a given expectation of the future price level, an econometrician would find a positive correlation between the price level and output — a negatively sloped Phillips Curve. Yet, under the same assumptions, Lucas also shows that an anticipated policy to raise the rate of inflation would fail to raise output (or, by implication, increase employment). But, given his very narrow underlying assumptions, it seems plausible to doubt the robustness of Lucas’s conclusion. Proving the validity of a proposition requires more than constructing an example in which the proposition is shown to be valid. That would be like trying to prove that the sides of every triangle are equal in length by constructing a triangle whose angles are all equal to 60 degrees, and then claiming that, because the sides of that triangle are equal in length, the sides of all triangles are equal in length.
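To make the mechanism concrete, here is a minimal numerical sketch of the signal-extraction logic behind Lucas’s example. It is not Lucas’s own calculation: the supply elasticity beta, the shock variances, and the linear signal-extraction weight theta are all illustrative assumptions on my part. Each market observes only its local price, forms a perceived general price level as a weighted average of its prior and that local price, and sets transitory supply Cy_it = β(p_it – Ep_it).

```python
import numpy as np

rng = np.random.default_rng(0)
beta, N, T = 1.5, 50, 2000           # supply elasticity, markets, periods (assumed)
sig_z2, sig_p2 = 1.0, 1.0            # variances of local and aggregate shocks (assumed)
theta = sig_p2 / (sig_p2 + sig_z2)   # signal-extraction weight on the local price

def simulate(mean_p):
    """Aggregate transitory supply each period when agents expect the level mean_p."""
    rows = []
    for _ in range(T):
        p_t = mean_p + rng.normal(0.0, np.sqrt(sig_p2))   # aggregate (log) price level
        z = rng.normal(0.0, np.sqrt(sig_z2), N)           # market-specific price shocks
        p_it = p_t + z                                    # local prices actually observed
        Ep_it = (1 - theta) * mean_p + theta * p_it       # perceived general price level
        rows.append((p_t, np.mean(beta * (p_it - Ep_it))))  # Cy_it = beta(p_it - Ep_it)
    return np.array(rows)

low = simulate(0.0)     # regime with a low anticipated price level
high = simulate(0.10)   # regime with a fully anticipated higher price level

# Within a regime, prices and output are positively correlated -- an apparent
# Phillips relation -- yet the anticipated shift leaves mean output unchanged.
slope = np.polyfit(low[:, 0], low[:, 1], 1)[0]
gain = high[:, 1].mean() - low[:, 1].mean()
```

Under these assumptions the fitted slope is positive (the econometrician's Phillips curve), while the fully anticipated increase in the price level produces essentially no change in average output, which is exactly the pair of results the two paragraphs above attribute to Lucas's example.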

Perhaps a better model than the one Lucas posited would have been one in which the amount supplied in each market was positively correlated with the amount supplied in every other market, inasmuch as an increase (decrease) in the amount supplied in one market will tend to increase (decrease) demand in other markets. In that case, I conjecture, deviations from permanent supply would tend to be cumulative (though not necessarily permanent), implying a more complex propagation mechanism than Lucas’s simple model does. Nor is it obvious to me how the equilibrium of such a model would compare to the equilibrium in the Lucas model. It does not seem inconceivable that a model could be constructed in which equilibrium output depended on the average price level. But this is just conjecture on my part, because I haven’t tried to write out and solve such a model. Perhaps an interested reader out there will try to work it out and report back to us on the results.
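For what it is worth, the flavor of that conjecture (though certainly not a solved model) can be illustrated with a toy simulation in which each market’s transitory supply responds both to its own price surprise, as in Lucas’s setup, and to demand spilling over from the previous period’s aggregate deviation. The spillover parameter lam, and indeed the whole reduced form, are pure assumptions of mine for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 20, 500
beta, lam = 1.0, 0.6   # lam: assumed demand spillover from last period's aggregate supply

dev = np.zeros((T, N))                        # deviations of supply from permanent level
for t in range(1, T):
    surprise = rng.normal(0.0, 1.0, N)        # local price surprises, as in Lucas
    # each market's deviation responds to its own surprise plus demand spilling
    # over from the average deviation across all markets in the previous period
    dev[t] = beta * surprise + lam * dev[t - 1].mean()

agg = dev.mean(axis=1)
# aggregate deviations are serially correlated: cumulative, though not permanent
autocorr = np.corrcoef(agg[:-1], agg[1:])[0, 1]
```

In this reduced form the aggregate deviation follows a first-order autoregression with coefficient lam, so deviations persist and cumulate before dying out, which is the qualitative propagation pattern conjectured above; whether anything like it emerges from a fully specified equilibrium model is, as the paragraph says, an open question.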

PS:  Congratulations to Scott Sumner on his excellent op-ed on nominal GDP level targeting in today’s Financial Times.

The State We’re In

Last week, Paul Krugman, set off by this blog post, complained about the current state of macroeconomics. Apparently, Krugman feels that if saltwater economists like himself were willing to accommodate the intertemporal-maximization paradigm developed by the freshwater economists, the freshwater economists ought to have reciprocated by acknowledging some role for countercyclical policy. Seeing little evidence of accommodation on the part of the freshwater economists, Krugman, evidently feeling betrayed, came to this rather harsh conclusion:

The state of macro is, in fact, rotten, and will remain so until the cult that has taken over half the field is somehow dislodged.

Besides engaging in a pretty personal attack on his fellow economists, Krugman did not present a very flattering picture of economics as a scientific discipline. What Krugman describes seems less like a search for truth than a cynical bargaining game, in which Krugman feels that his (saltwater) side, after making good faith offers of cooperation and accommodation that were seemingly accepted by the other (freshwater) side, was somehow misled into making concessions that undermined his side’s strategic position. What I found interesting was that Krugman seemed unaware that his account of the interaction between saltwater and freshwater economists was not much more flattering to the former than to the latter.

Krugman’s diatribe gave Stephen Williamson an opportunity to scorn and scold Krugman for a crass misunderstanding of the progress of science. According to Williamson, modern macroeconomics has passed by out-of-touch old-timers like Krugman. Among modern macroeconomists, Williamson observes, the freshwater-saltwater distinction is no longer meaningful or relevant. Everyone is now, more or less, on the same page; differences are worked out collegially in seminars, workshops, conferences and in the top academic journals without the rancor and disrespect in which Krugman indulges himself. If you are lucky (and hard-working) enough to be part of it, macroeconomics is a great place to be. One can almost visualize the condescension and the pity oozing from Williamson’s pores for those not part of the charmed circle.

Commenting on this exchange, Noah Smith generally agreed with Williamson that modern macroeconomics is not a discipline divided against itself; the intertemporal maximizers are clearly dominant. But Noah allows himself to wonder whether this is really any cause for celebration – celebration, at any rate, by those not in the charmed circle.

So macro has not yet discovered what causes recessions, nor come anywhere close to reaching a consensus on how (or even if) we should fight them. . . .

Given this state of affairs, can we conclude that the state of macro is good? Is a field successful as long as its members aren’t divided into warring camps? Or should we require a science to give us actual answers? And if we conclude that a science isn’t giving us actual answers, what do we, the people outside the field, do? Do we demand that the people currently working in the field start producing results pronto, threatening to replace them with people who are currently relegated to the fringe? Do we keep supporting the field with money and acclaim, in the hope that we’re currently only in an interim stage, and that real answers will emerge soon enough? Do we simply conclude that the field isn’t as fruitful an area of inquiry as we thought, and quietly defund it?

All of this seems to me to be a side issue. Who cares if macroeconomists like each other or hate each other? Whether they get along or not, whether they treat each other nicely or not, is really of no great import. For example, it was largely at Milton Friedman’s urging that Harry Johnson was hired to be the resident Keynesian at Chicago. But almost as soon as Johnson arrived, he and Friedman were getting into rather unpleasant personal exchanges and arguments. And even though Johnson underwent a metamorphosis from mildly left-wing Keynesianism to moderately conservative monetarism during his nearly two decades at Chicago, his personal and professional relationship with Friedman got progressively worse. And all of that nastiness was happening while both Friedman and Johnson were becoming dominant figures in the economics profession. So what does the level of collegiality and absence of personal discord have to do with the state of a scientific or academic discipline? Not all that much, I would venture to say.

So when Scott Sumner says:

while Krugman might seem pessimistic about the state of macro, he’s a Pollyanna compared to me. I see the field of macro as being completely adrift

I agree totally. But I diagnose the problem with macro a bit differently from how Scott does. He is chiefly concerned with getting policy right, which is certainly important, inasmuch as policy, since early 2008, has, for the most part, been disastrously wrong. One did not need a theoretically sophisticated model to see that the FOMC, out of misplaced concern that inflation expectations were becoming unanchored, kept money way too tight in 2008 in the face of rising food and energy prices, even as the economy was rapidly contracting in the second and third quarters. And in the wake of the contraction in the second and third quarters and a frightening collapse and panic in the fourth quarter, it did not take a sophisticated model to understand that rapid monetary expansion was called for. That’s why Scott writes the following:

All we really know is what Milton Friedman knew, with his partial equilibrium approach. Monetary policy drives nominal variables.  And cyclical fluctuations caused by nominal shocks seem sub-optimal.  Beyond that it’s all conjecture.

Ahem, and Marshall and Wicksell and Cassel and Fisher and Keynes and Hawtrey and Robertson and Hayek and at least 25 others that I could easily name. But it’s interesting to note that, despite his Marshallian (anti-Walrasian) proclivities, it was Friedman himself who started modern macroeconomics down the fruitless path it has been following for the last 40 years when he introduced the concept of the natural rate of unemployment in his famous 1968 AEA Presidential lecture on the role of monetary policy. Friedman defined the natural rate of unemployment as:

the level [of unemployment] that would be ground out by the Walrasian system of general equilibrium equations, provided there is embedded in them the actual structural characteristics of the labor and commodity markets, including market imperfections, stochastic variability in demands and supplies, the costs of gathering information about job vacancies, and labor availabilities, the costs of mobility, and so on.

Aside from the peculiar verb choice in describing the solution of an unknown variable contained in a system of equations, what is noteworthy about his definition is that Friedman was explicitly adopting a conception of an intertemporal general equilibrium as the unique and stable solution of that system of equations, and, whether he intended to or not, appeared to be suggesting that such a concept was operationally useful as a policy benchmark. Thus, despite Friedman’s own deep skepticism about the usefulness and relevance of general-equilibrium analysis, Friedman, for whatever reasons, chose to present his natural-rate argument in the language (however stilted on his part) of the Walrasian general-equilibrium theory for which he had little use and even less sympathy.

Inspired by the powerful policy conclusions that followed from the natural-rate hypothesis, Friedman’s direct and indirect followers, most notably Robert Lucas, used that analysis to transform macroeconomics, reducing macroeconomics to the manipulation of a simplified intertemporal general-equilibrium system. Under the assumption that all economic agents could correctly forecast all future prices (aka rational expectations), all agents could be viewed as intertemporal optimizers, any observed unemployment reflecting the optimizing choices of individuals to consume leisure or to engage in non-market production. I find it inconceivable that Friedman could have been pleased with the direction taken by the economics profession at large, and especially by his own department when he departed Chicago in 1977. This is pure conjecture on my part, but Friedman’s departure upon reaching retirement age might have had something to do with his own lack of sympathy with the direction that his own department had, under Lucas’s leadership, already taken. The problem was not so much with policy, but with the whole conception of what constitutes macroeconomic analysis.

The paper by Carlaw and Lipsey, which I referenced in my previous post, provides just one of many possible lines of attack against what modern macroeconomics has become. Without in any way suggesting that their criticisms are not weighty and serious, I would just point out that there really is no basis at all for assuming that the economy can be appropriately modeled as being in a continuous, or nearly continuous, state of general equilibrium. In the absence of a complete set of markets, the Arrow-Debreu conditions for the existence of a full intertemporal equilibrium are not satisfied, and there is no market mechanism that leads, even in principle, to a general equilibrium. The rational-expectations assumption is simply a deus-ex-machina method by which to solve a simplified model, a method with no real-world counterpart. And the suggestion that rational expectations is no more than the extension, let alone a logical consequence, of the standard rationality assumptions of basic economic theory is transparently bogus. Nor is there any basis for assuming that, if a general equilibrium does exist, it is unique, and that if it is unique, it is necessarily stable. In particular, in an economy with an incomplete (in the Arrow-Debreu sense) set of markets, an equilibrium may very much depend on the expectations of agents, expectations potentially even being self-fulfilling. We actually know that in many markets, especially those characterized by network effects, equilibria are expectation-dependent. Self-fulfilling expectations may thus be a characteristic property of modern economies, but they do not necessarily produce equilibrium.

An especially pretentious conceit of the modern macroeconomics of the last 40 years is that the extreme assumptions on which it rests are the essential microfoundations without which macroeconomics lacks any scientific standing. That’s preposterous. Perfect foresight and rational expectations are assumptions required for finding the solution to a system of equations describing a general equilibrium. They are not essential properties of a system consistent with the basic rationality propositions of microeconomics. To insist that a macroeconomic theory must correspond to the extreme assumptions necessary to prove the existence of a unique stable general equilibrium is to guarantee in advance the sterility and uselessness of that theory, because the entire field of study called macroeconomics is the result of long historical experience strongly suggesting that persistent, even cumulative, deviations from general equilibrium have been routine features of economic life since at least the early 19th century. That modern macroeconomics can tell a story in which apparently large deviations from general equilibrium are not really what they seem is not evidence that such deviations don’t exist; it merely shows that modern macroeconomics has constructed a language that allows the observed data to be classified in terms consistent with a theoretical paradigm that does not allow for lapses from equilibrium. That modern macroeconomics has constructed such a language is no reason why anyone not already committed to its underlying assumptions should feel compelled to accept its validity.

In fact, the standard comparative-statics propositions of microeconomics are also based on the assumption of the existence of a unique stable general equilibrium. Those comparative-statics propositions about the signs of the derivatives of various endogenous variables (price, quantity demanded, quantity supplied, etc.) with respect to various parameters of a microeconomic model involve comparisons between equilibrium values of the relevant variables before and after the posited parametric changes. All such comparative-statics results involve a ceteris-paribus assumption, conditional on the existence of a unique stable general equilibrium which serves as the starting and ending point (after adjustment to the parameter change) of the exercise, thereby isolating the purely hypothetical effect of a parameter change. Thus, as much as macroeconomics may require microfoundations, microeconomics is no less in need of macrofoundations, i.e., the existence of a unique stable general equilibrium, absent which a comparative-statics exercise would be meaningless, because the ceteris-paribus assumption could not otherwise be maintained. To assert that macroeconomics is impossible without microfoundations is therefore to reason in a circle, the empirically relevant propositions of microeconomics being predicated on the existence of a unique stable general equilibrium. But it is precisely the putative failure of a unique stable intertemporal general equilibrium to be attained, or to serve as a powerful attractor to economic variables, that provides the rationale for the existence of a field called macroeconomics.

So I certainly agree with Krugman that the present state of macroeconomics is pretty dismal. However, his own admitted willingness (and that of his New Keynesian colleagues) to adopt a theoretical paradigm that assumes the perpetual, or near-perpetual, existence of a unique stable intertemporal equilibrium, or at most admits the possibility of a very small set of deviations from such an equilibrium, means that, by his own admission, Krugman and his saltwater colleagues also bear a share of the responsibility for the very state of macroeconomics that Krugman now deplores.

About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
