Posts Tagged 'Nash Equilibrium'

A Tale of Two Syntheses

I recently finished reading a slender, but weighty, collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystallize ideas about which I have been ruminating and writing for a long time, but especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of the idea of microfoundations, making clear that the search for microfoundations long preceded the Lucasian microfoundations movement of the 1970s and 1980s that would revolutionize macroeconomics in the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea of microfoundations depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Until Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so often the consistency of optimizing plans can only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, of which one (the most egregious, but not the only one) is the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor of F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the economic orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction, in which Holmes and Moriarty are caught in a dilemma in which neither can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, showed the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student, Abraham Wald, a pure mathematician of exceptional ability, who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research, and tutoring Morgenstern in mathematics and its applications to economic theory. Wald himself went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another student of Menger, John von Neumann, who shared an interest in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing The Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox, inasmuch as an equilibrium solution of their interaction could be found once the payoffs to their joint choices were specified, thereby enabling Holmes and Moriarty to choose optimal probabilistic strategies.

I don’t think that the game-theoretic solution to the Holmes-Moriarty game is as straightforward as Morgenstern eventually agreed, but the critical point in the microfoundations discussion is that the mathematical solution to the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents in an economic or game-theoretic equilibrium to be reconciled, i.e., rendered mutually consistent, in equilibrium. Under the Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decisions with other agents (I leave the question of who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader) or it is assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.
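Morgenstern’s change of mind can be made concrete with a small sketch. Treating the pursuit as a two-by-two zero-sum game, the optimal probabilistic strategies follow from the standard formula for 2x2 games without a saddle point. The payoff numbers below are purely hypothetical illustrations of my own, not the payoffs von Neumann and Morgenstern actually used:

```python
from fractions import Fraction

# Hypothetical payoffs for the Holmes-Moriarty pursuit, from Moriarty's
# (the row player's) point of view: rows are Moriarty's choice of station,
# columns are Holmes's. Positive numbers favor Moriarty.
A = [[Fraction(100), Fraction(-50)],
     [Fraction(0),   Fraction(100)]]

def mixed_equilibrium_2x2(A):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game with no saddle point."""
    a, b = A[0]
    c, d = A[1]
    denom = a - b - c + d
    p = (d - c) / denom          # probability the row player plays row 1
    q = (d - b) / denom          # probability the column player plays column 1
    v = (a * d - b * c) / denom  # value of the game to the row player
    return p, q, v

p, q, v = mixed_equilibrium_2x2(A)
print(p, q, v)  # 2/5 3/5 40
```

At these strategies each player is exactly indifferent between his two pure choices given the other’s randomization, which is the precise sense in which the two optimal plans are rendered mutually consistent rather than merely assumed to be.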

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis — that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as the two research programs came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin, which attempted to derive, or at least evaluate, key Keynesian propositions in the context of a full general equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas, which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model”; and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, offering a static one-period model not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions for a unique and stable general equilibrium.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of differential equations, describing market price adjustments as functions of market excess demands, would converge, in the sense of Lyapunov, on an equilibrium price vector. But Samuelson’s approach to establishing stability required the mechanism of a fictional tatonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.
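A minimal sketch of what such a tatonnement process amounts to may be helpful. The two-good exchange economy below is a hypothetical construction of my own, not a model discussed in the volume: the fictitious auctioneer raises the price where there is excess demand and lowers it where there is excess supply, with no trading permitted until convergence. The example is deliberately well behaved (the goods are gross substitutes), which is what guarantees convergence here; in general, nothing does.

```python
def excess_demand(p1):
    # Two goods; good 2 is the numeraire (p2 = 1). Consumers have
    # Cobb-Douglas preferences, spending half of wealth on each good,
    # and the aggregate endowment is one unit of each good.
    wealth = p1 * 1.0 + 1.0
    demand_for_good_1 = 0.5 * wealth / p1
    return demand_for_good_1 - 1.0   # excess demand for good 1

def tatonnement(p1=5.0, dt=0.1, tol=1e-9, max_steps=100_000):
    # Samuelson's adjustment rule: dp/dt proportional to excess demand.
    for _ in range(max_steps):
        z = excess_demand(p1)
        if abs(z) < tol:
            break
        p1 += dt * z                 # price rises where demand exceeds supply
    return p1

print(tatonnement())  # converges to the equilibrium relative price, 1.0
```

By Walras’s law the numeraire market clears whenever the first market does, so checking one market suffices in this two-good case.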

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria, corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from an initial to a new equilibrium. If the economy isn’t stable, a disturbance causing an economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative statics routinely relied upon by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. While extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped determination of the intertemporal-equilibrium price vector into a preliminary time period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be and are taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the questions of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had been wrestling with in the 1930s. In particular, Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary-equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary-equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis started coming apart.

The fourth essay by Phil Mirowski investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that, owing to their Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic, if not overtly hostile, to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed-lag function, might have considered whether expectations could perform a potentially equilibrating role in a general equilibrium model.

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent, so that equilibrium is not inevitable, only a possibility. Instead, the introduction of expectations as an equilibrating variable did not occur until the mid-1970s, when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in introducing rational expectations, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium, without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay by Michel DeVroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”) is a philosophically sophisticated analysis of Lucasian microfoundations methodological principles. DeVroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous, microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” DeVroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, DeVroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, DeVroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an indisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization was a necessary component of the equilibrium discipline, to be ruthlessly imposed on pain of excommunication from the macroeconomic community, so, too, was the correlate principle of market clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. DeVroey further quotes from the war declaration of Lucas and Sargent:

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the war declaration became evident later after right-thinking was enforced, and woe unto him that dared deviate from the right way of thinking.

But, as DeVroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight-of-hand was accomplished, DeVroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different microfoundations conception espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, then their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, DeVroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.

That is a remarkable non sequitur. And from that non sequitur, Lucas jumps to a further non sequitur: that an optimizing representative agent is all that’s required for a macroeconomic model. The logical straightjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on a market-clearing principle turns out to be a subterfuge by which the pretense of upholding the principle conceals its violation in practice.

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If the agents interact with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but then the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of positive-sum games. He defined conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To simply assert that, because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on an equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. In the current consensus, mainstream macroeconomists from a variety of backgrounds have reconciled themselves and adjusted to the methodological absolutism that Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, in unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow, and, as I have just shown, strictly false even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable because the inherent theoretical weaknesses of both paradigms were unable to withstand criticisms of the theoretical apparatus and of the policy recommendations emerging from that synthesis, particularly an inability to provide a straightforward analysis of inflation when it became a serious policy problem in the late 1960s and 1970s. But neither the Keynesian nor the Walrasian paradigms were developing in a way that addressed the points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model, the absence of a market for real capital and of a market for endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary equilibrium paradigm might have provided a viable way forward, and for a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency, derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone provide an answer to, the question whether the optimization that it demands is actually achieved in practice or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox, his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher in his important work, The Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance in how to improve the current sorry state of macroeconomics.

 

A Primer on Equilibrium

After my latest post about rational expectations, Henry from Australia, one of my most prolific commenters, has been engaging me in a conversation about what assumptions are made – or need to be made – for an economic model to have a solution and for that solution to be characterized as an equilibrium, and in particular, a general equilibrium. Equilibrium in economics is not always a clearly defined concept, and it can have a number of different meanings depending on the properties of a given model. But the usual understanding is that the agents in the model (as consumers or producers) are trying to do as well for themselves as they can, given the endowments of resources, skills and technology at their disposal and given their preferences. The conversation was triggered by my assertion that rational expectations must be “compatible with the equilibrium of the model in which those expectations are embedded.”

That was the key insight of John Muth in his paper introducing the rational-expectations assumption into economic modelling. So in any model in which the current and future actions of individuals depend on their expectations of the future, the model cannot arrive at an equilibrium unless those expectations are consistent with the equilibrium of the model. If the expectations of agents are incompatible or inconsistent with the equilibrium of the model, then, since the actions taken or plans made by agents are based on those expectations, the model cannot have an equilibrium solution.

Now Henry thinks that this reasoning is circular. My argument would be circular if I defined an equilibrium to be the same thing as correct expectations. But I am not so defining an equilibrium. I am saying that the correctness of expectations by all agents implies 1) that their expectations are mutually consistent, and 2) that, having made plans based on their expectations, which, by assumption, agents felt were the best set of choices available to them given those expectations, if the expectations of the agents are realized, then they would not regret the decisions and the choices that they made. Each agent would be as well off as he could have made himself, given his perceived opportunities when the decisions were made. That the correctness of expectations implies equilibrium is the consequence of assuming that agents are trying to optimize their decision-making, given their available and expected opportunities. If all expected opportunities are correctly foreseen, then all decisions will have been the optimal decisions under the circumstances. But nothing has been said that requires all expectations to be correct, or even that it is possible for all expectations to be correct. If an equilibrium does not exist (and the mere fact that you can write down an economic model does not mean that a solution to the model exists), then the sweet spot where all expectations are consistent and compatible is just a blissful fantasy. So a logical precondition to showing that rational expectations are even possible is to prove that an equilibrium exists. There is nothing circular about the argument.

Now the key to proving the existence of a general equilibrium is to show that the general-equilibrium model implies the existence of what mathematicians call a fixed point: a point that a mapping – a rule or a function taking every point in a set to a point in the same set – assigns to itself. The relevant fixed-point theorems apply to continuous mappings of a convex, compact set into itself. A convex, compact set has two important properties: 1) the line connecting any two points in the set is entirely contained within the boundaries of the set, and 2) the set is closed and bounded, with no gaps between any two points in the set. The set of points in a circle or a rectangle is a convex compact set; the set of points contained in the Star of David is not a convex set. Any two points in the circle will be connected by a line that lies completely within the circle; the points at adjacent tips of a Star of David will be connected by a line that lies partly outside the Star of David.
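To make the fixed-point idea concrete, here is a minimal sketch of my own (not from any model discussed in the post): the interval [0, 1] is a convex compact set, cos(x) maps it continuously into itself, and iterating the map happens, in this case, to home in on the fixed point whose existence Brouwer’s theorem guarantees.

```python
# Illustrative example: Brouwer's theorem says a continuous map of a
# convex compact set into itself has at least one fixed point.
# Here the set is the interval [0, 1] and the map is f(x) = cos(x).
import math

def f(x):
    return math.cos(x)  # cos maps [0, 1] into [0, 1]

x = 0.5
for _ in range(100):
    x = f(x)  # repeated application converges here (not guaranteed in general)

print(round(x, 6))  # the fixed point of cos on [0, 1], about 0.739085
```

Note that Brouwer’s theorem only guarantees that a fixed point exists; iteration converges to it here only because this particular map is a contraction.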

If you think of the set of all possible price vectors for an economy, those vectors – each containing a price for each good or service in the economy – could be mapped onto itself in the following way. Given all the equations describing the behavior of each agent in the economy, the quantity demanded and supplied of each good could be calculated, giving us the excess demand (the difference between the amounts demanded and supplied) for each good. Then the price of every good in excess demand would be raised, the price of every good in negative excess demand would be reduced, and the price of every good with zero excess demand would be held constant. To ensure that the mapping takes points from a given convex compact set into the same set, all prices could be normalized so that the sum of the individual prices would always equal 1. The fixed-point theorem ensures that for a continuous mapping from a convex compact set into itself there must be at least one fixed point, i.e., at least one point in the set that gets mapped onto itself. The price vector corresponding to that point is an equilibrium, because, given how our mapping rule was defined, a point would be mapped onto itself if and only if all excess demands are zero, so that no price changes. Every fixed point – and there may be more than one – corresponds to an equilibrium price vector, and every equilibrium price vector is associated with a fixed point.
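As an illustration, here is a toy version of that mapping in Python. The two-good, two-agent Cobb-Douglas exchange economy and all its parameter values are my own assumed example, not anything in the post: excess demands are computed from the agents’ demands, each price is nudged in the direction of its excess demand, and the vector is renormalized so prices always sum to 1.

```python
# Assumed toy economy: two goods, two Cobb-Douglas agents.
alphas = [(0.6, 0.4), (0.2, 0.8)]   # each agent's budget shares for the two goods
endow  = [(1.0, 0.0), (0.0, 1.0)]   # each agent's endowment of the two goods

def excess_demand(p):
    """Quantity demanded minus quantity supplied for each good at prices p."""
    z = [0.0, 0.0]
    for a, w in zip(alphas, endow):
        income = sum(pg * wg for pg, wg in zip(p, w))
        for g in range(2):
            z[g] += a[g] * income / p[g] - w[g]
    return z

p = [0.5, 0.5]
for _ in range(10000):
    z = excess_demand(p)
    # raise prices of goods in excess demand, lower those in excess supply
    p = [max(pg + 0.01 * zg, 1e-9) for pg, zg in zip(p, z)]
    s = sum(p)
    p = [pg / s for pg in p]  # normalize so that prices sum to 1

print([round(pg, 3) for pg in p])  # converges to [0.333, 0.667] here
```

In this economy the fixed point is the price vector (1/3, 2/3), at which both excess demands are zero; the convergence of the iteration is a property of this well-behaved example, not of tatonnement in general, as the next paragraph explains.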

Before going on, I ought to make an important observation that is often ignored. The mathematical proof of the existence of an equilibrium doesn’t prove that the economy operates at an equilibrium, or even that the equilibrium could be identified under the mapping rule described (which is a kind of formalization of the Walrasian tatonnement process). The mapping rule doesn’t guarantee that you would ever discover a fixed point in any finite number of iterations. Walras thought the price-adjustment rule of raising the prices of goods in excess demand and reducing the prices of goods in excess supply would converge on the equilibrium price vector. But the conditions under which you can prove that the naïve price-adjustment rule converges to an equilibrium price vector turn out to be very restrictive. So even though we can prove that the competitive model has an equilibrium solution – in other words, that the behavioral, structural and technological assumptions of the model are coherent, so that the model has a solution – the model contains no assumptions about how prices are actually determined that would prove that the equilibrium is ever reached. In fact, the problem is even more daunting than the previous sentence suggests, because even Walrasian tatonnement imposes an incredibly powerful restriction, namely that no trading is allowed at non-equilibrium prices. In practice there are almost never recontracting provisions allowing traders to revise the terms of their trades once it becomes clear that the prices at which trades were made were not equilibrium prices.

I now want to show how price expectations fit into all of this, because the original general equilibrium models were either one-period models or formal intertemporal models that were reduced to single-period models by assuming that all trading for future delivery was undertaken in the first period by long-lived agents who would eventually carry out the transactions that were contracted in period 1 for subsequent consumption and production. Time was preserved in a purely formal, technical way, but all economic decision-making was actually concluded in the first period. But even though the early general-equilibrium models did not encompass expectations, one of the extraordinary precursors of modern economics, Augustin Cournot, who was way too advanced for his contemporaries even to comprehend, much less make any use of, what he was saying, had incorporated the idea of expectations into the solution of his famous economic model of oligopolistic price setting.

The key to oligopolistic pricing is that each oligopolist must take into account not just consumer demand for his product and his own production costs; he must also consider what actions will be taken by his rivals. This is not a problem for a competitive producer (a price-taker) or a pure monopolist. The price-taker simply compares the price at which he can sell as much as he wants with his production costs, and decides how much to produce by comparing his marginal cost to price, increasing output until marginal cost rises to match the price at which he can sell. The pure monopolist, if he knows, as is assumed in such exercises, or thinks he knows, the shape of the customer demand curve, selects the price and quantity combination on the demand curve that maximizes total profit (corresponding to the equality of marginal revenue and marginal cost). In oligopolistic situations, each producer must take into account how much his rivals will sell, or what prices they will set.

It was by positing such a situation and finding an analytic solution that Cournot made a stunning intellectual breakthrough. In the simple duopoly case, Cournot posited that if the duopolists had identical costs, then each could find his optimal output conditional on the output chosen by the other. This is a simple profit-maximization problem for each duopolist, given a demand curve for the combined output of both (assumed to be identical, so that a single price must obtain for the output of both), a cost curve, and the output of the other duopolist. Thus, for each duopolist there is a reaction curve showing his optimal output given the output of the other. See the accompanying figure.

If one duopolist produces zero, the optimal output for the other is the monopoly output. Depending on the level of marginal cost, there is some output by either of the duopolists that is sufficient to make it unprofitable for the other duopolist to produce anything. That level of output corresponds to the competitive output at which price just equals marginal cost. So the slope of each reaction function corresponds to the ratio of the competitive output to the monopoly output, which, with linear demand and constant marginal cost, is 2:1. Given identical costs, the two reaction curves are symmetric, and the optimal output for each, given the expected output of the other, corresponds to the intersection of the two reaction curves, at which both duopolists produce the same quantity. The combined output of the two duopolists will be greater than the monopoly output, but less than the competitive output at which price equals marginal cost. With linear demand and constant marginal cost, it turns out that each duopolist produces one-third of the competitive output. In the general case with n oligopolists, the ratio of the combined output of all n firms to the competitive output equals n/(n+1).
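The arithmetic in this paragraph can be checked with a short sketch, assuming linear demand P = a − bQ and constant marginal cost c; the parameter values are arbitrary choices of mine for illustration.

```python
# Assumed specification: linear demand P = a - b*Q, constant marginal cost c.
a, b, c = 100.0, 1.0, 10.0

q_comp = (a - c) / b          # competitive output: price driven down to marginal cost
q_mono = (a - c) / (2 * b)    # monopoly output: half the competitive output

def cournot_total(n):
    # symmetric n-firm Cournot: each firm produces (a - c) / ((n + 1) * b)
    return n * (a - c) / ((n + 1) * b)

print(q_mono / q_comp)            # 1/2: the monopoly-to-competitive ratio
print(cournot_total(2) / q_comp)  # 2/3: each duopolist supplies one-third
print(cournot_total(5) / q_comp)  # 5/6, i.e. n/(n+1) with n = 5
```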

Cournot’s solution corresponds to a fixed point at which the equilibrium of the model implies that both duopolists have correct expectations of the output of the other. Given the assumptions of the model, if the duopolists both expect the other to produce an output equal to one-third of the competitive output, their expectations will be consistent and will be realized. If either one expects the other to produce a different output, the outcome will not be an equilibrium, and each duopolist will regret his output decision, because the price at which he can sell his output will differ from the price that he had expected. In the Cournot case, you could define a mapping from the vector of outputs that each duopolist expects the other to produce to the vector of outputs that each duopolist plans in response. An equilibrium corresponds to a case in which both duopolists expect precisely the output planned by the other; if either duopolist expects a different output from what the other plans, the outcome is not an equilibrium.
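That expectations mapping can be sketched directly, again assuming linear demand and constant marginal cost with arbitrary parameter values of my own: each duopolist plans the best response to the output he expects from his rival, and iterating the mapping converges, in this case, to the fixed point where expectations and plans coincide.

```python
# Assumed specification: linear demand P = a - b*(q1 + q2), marginal cost c.
a, b, c = 100.0, 1.0, 10.0

def best_response(q_other):
    # profit-maximizing output given the output expected from the rival
    return max((a - c - b * q_other) / (2 * b), 0.0)

e1, e2 = 0.0, 40.0   # arbitrary initial expectations of the rival's output
for _ in range(50):
    q1, q2 = best_response(e1), best_response(e2)   # plans given expectations
    e1, e2 = q2, q1   # next round, each expects what the other just planned

print(round(q1, 4), round(q2, 4))  # both converge to 30.0 = (a - c) / (3 * b)
```

At the fixed point each duopolist produces (a − c)/3b, one-third of the competitive output (a − c)/b, matching the result in the text; starting from any other expectations, plans and expectations disagree and the outcome is not an equilibrium.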

We can now recognize that Cournot’s solution anticipated John Nash’s concept of an equilibrium strategy, in which each player chooses a strategy that is optimal given his expectation of what the other player’s strategy will be. A Nash equilibrium corresponds to a fixed point at which each player chooses an optimal strategy based on the correct expectation of what the other player’s strategy will be. Many games have more than one Nash equilibrium. For example, rather than base their decisions on an expectation of the quantity choice of the other duopolist, the two duopolists could base their decisions on an expectation of what price the other duopolist would set. In the constant-cost case, this choice of strategies would lead to the competitive output, because each duopolist would conclude that the optimal strategy of the other would be to charge a price just sufficient to cover his marginal cost. This was the alternative oligopoly model suggested by another French economist, J. L. F. Bertrand. Of course there is a lot more to be said about how oligopolists strategize than is captured by these two models, and about the conditions under which one or the other model is the more appropriate. I just want to observe that assumptions about expectations are crucial to how we analyze market equilibrium, and that the importance of these assumptions for understanding market behavior has been recognized for a very long time.

But from a macroeconomic perspective, the important point is that expected prices become the critical equilibrating variable in the theory of general equilibrium and in macroeconomics in general. Single-period models of equilibrium, including general-equilibrium models that are formally intertemporal, but in which all trades are executed in the initial period at known prices in a complete array of markets determining all future economic activity, are completely sterile and useless for macroeconomics except as a stepping stone to analyzing the implications of imperfect forecasts of future prices. If we want to think about general equilibrium in a useful macroeconomic context, we have to think about a general-equilibrium system in which agents make plans about consumption and production over time based on only the vaguest conjectures about what future conditions will be like when the various interconnected stages of their plans will be executed.

Unlike the full Arrow-Debreu system of complete markets, a general-equilibrium system with incomplete markets cannot be equilibrated, even in principle, by price adjustments in the incomplete set of present markets. Equilibration depends on the consistency of expected prices with equilibrium. If equilibrium is characterized by a fixed point, the relevant mapping must take the set of vectors of current and expected prices into itself. That means that expected future prices are as much equilibrating variables as current market prices. But expected future prices exist only in the minds of the agents; they are not directly subject to change by market forces in the way that prices in actual markets are. If the equilibrating tendencies of market prices in a system of complete markets are very far from completely effective, the equilibrating tendencies of expected future prices may be not merely non-existent but actually disequilibrating.

The problem of price expectations in an intertemporal general-equilibrium system is central to the understanding of macroeconomics. Hayek, who was the father of intertemporal equilibrium theory, which he was the first to outline in a 1928 paper in German, and who explained the problem with unsurpassed clarity in his 1937 paper “Economics and Knowledge,” unfortunately did not seem to acknowledge its radical consequences for macroeconomic theory, or the potential ineffectiveness of self-equilibrating market forces. My quarrel with rational expectations as a strategy of macroeconomic analysis is its implicit assumption, lacking any analytical support, that prices and price expectations somehow always adjust to equilibrium values. In certain contexts, when there is no apparent basis to question whether a particular market is functioning efficiently, rational expectations may be a reasonable working assumption for modelling observed behavior. However, when there is reason to question whether a given market is operating efficiently, or whether an entire economy is operating close to its potential, then to insist as a matter of principle that the rational-expectations assumption must be made – to assume, in other words, that actual and expected prices adjust rapidly to their equilibrium values, allowing an economy to operate at or near its optimal growth path – is simply, as I have often said, an exercise in circular reasoning and question begging.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan

Follow me on Twitter @david_glasner
